WorldWideScience

Sample records for ethernet-based daq systems

  1. Cold front-end electronics and Ethernet-based DAQ systems for large LAr TPC readout

    CERN Document Server

    D. Autiero; B. Carlus; Y. Declais; S. Gardien; C. Girerd; J. Marteau; H. Mathez

    2010-01-01

    Large LAr TPCs are among the most powerful detectors for addressing open problems in particle and astroparticle physics, such as CP violation in the leptonic sector, neutrino properties and their astrophysical implications, and proton decay searches. The scale of such detectors places severe constraints on their readout and DAQ systems. We are carrying out an electronics R&D programme on a complete readout chain, including an ASIC located close to the collecting planes in the argon gas phase and a DAQ system based on smart Ethernet sensors implemented in the µTCA standard. The choice of the latter standard is motivated by the similarity of its constraints to those found in the network telecommunication industry. We have also developed a synchronization scheme derived from the IEEE 1588 standard, complemented by the use of the clock recovered from the Gigabit link.
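
    The IEEE 1588 (PTP) scheme mentioned above rests on a simple two-exchange timestamp calculation. A minimal sketch, assuming a symmetric path delay (timestamps and the function name are illustrative, not from the paper):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE 1588 two-step exchange:
    t1: master sends Sync, t2: slave receives Sync,
    t3: slave sends Delay_Req, t4: master receives Delay_Req.
    Assumes the path delay is the same in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way path delay
    return offset, delay

# Example: slave clock 5 units ahead of the master, 2 units of path delay
offset, delay = ptp_offset_and_delay(100.0, 107.0, 200.0, 197.0)
print(offset, delay)  # -> 5.0 2.0
```

    The clock recovered from the Gigabit link, as mentioned in the abstract, would serve to keep the slave's frequency locked between such offset corrections.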

  2. Ethernet-Based Embedded System for FEL Diagnostics and Controls

    International Nuclear Information System (INIS)

    Jianxun Yan; Daniel Sexton; Steven Moore; Albert Grippo; Kevin Jordan

    2006-01-01

    An Ethernet-based embedded system has been developed to upgrade the Beam Viewer and Beam Position Monitor (BPM) systems within the free-electron laser (FEL) project at Jefferson Lab. The embedded microcontroller was mounted on the front-end I/O cards, with software packages such as the Experimental Physics and Industrial Control System (EPICS) and the Real-Time Executive for Multiprocessor Systems (RTEMS) running as an Input/Output Controller (IOC). By cross-compiling EPICS, the RTEMS kernel, the IOC device support, and the databases, all of these components can be downloaded into the microcontroller. The first version of the BPM electronics based on the embedded controller was built and is currently running in our FEL system. A new version of the BPM that will use a Single Board IOC (SBIOC), which integrates a Field-Programmable Gate Array (FPGA) and a ColdFire embedded microcontroller, is presently under development. The new system has the features of a low-cost IOC, an open-source real-time operating system, plug-and-play-like ease of installation and flexibility, and provides a much more localized solution.

  3. DAQ

    CERN Multimedia

    F. Meijers

    2010-01-01

    The DAQ system (see Figure 2) consists of: - the full detector read-out of a total of 633 FEDs (Front-End Drivers) – the FRL (Front-End Readout Link) provides the common interface between the sub-detector-specific FEDs and the central DAQ; - 8 DAQ slices with a 100 GB/s event building capacity – corresponding to a nominal 2 kB per FRL at a Level-1 (L1) trigger rate of 100 kHz; - an event filter to run the HLT (High Level Trigger) comprising 720 PCs with two quad-core 2.6 GHz CPUs; - a 16-node storage manager system allowing a writing rate that exceeds 1 GB/s, with concurrent transfers to Tier 0 at the same rate, and a total storage capacity of 250 TB. It also forwards events to the online DQM (Data Quality Monitoring). Figure 2: The CMS DAQ system. The DAQ system for the 2010 physics runs: The DAQ system has been deployed for pp and heavy-ion physics data-taking. It can be easily ...

  4. The LHCb DAQ system

    CERN Document Server

    Jost, B

    2000-01-01

    The LHCb experiment is the most recently approved of the four experiments under construction at CERN's LHC accelerator. It is a special-purpose experiment designed to precisely measure the CP violation parameters in the B–B̄ system. Triggering poses special problems, since the interesting events containing B mesons are immersed in a large background of inelastic p-p reactions. We therefore decided to implement a four-level triggering scheme. The LHCb Data Acquisition (DAQ) system will have to cope with an average trigger rate of ~40 kHz, after two levels of hardware triggers, and an average event size of ~150 kB. Thus an event-building network which can sustain an average bandwidth of 6 GB/s is required. A powerful software trigger farm will have to be installed to reduce the rate from 40 kHz to the ~200 Hz of events written to permanent storage. In this paper we concentrate on the networking aspects of the LHCb data acquisition and the controls system.
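
    The quoted event-building bandwidth follows directly from the trigger rate and event size; a quick back-of-the-envelope check of the figures given in the abstract:

```python
trigger_rate_hz = 40e3     # average rate after two hardware trigger levels
event_size_bytes = 150e3   # average event size (~150 kB)

# Event-building network must sustain rate x size
bandwidth_bytes_per_s = trigger_rate_hz * event_size_bytes
print(bandwidth_bytes_per_s / 1e9)  # -> 6.0 (GB/s), the quoted figure

# Software trigger farm: 40 kHz in, ~200 Hz written to permanent storage
rejection_factor = trigger_rate_hz / 200.0
print(rejection_factor)  # -> 200.0
```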

  5. DAQ

    CERN Multimedia

    F. Meijers

    The DAQ system consists of the full detector readout, 8 DAQ slices with a 1 Tbit/s event building capacity, an event filter to run the HLT comprising 720 8-core PCs, and a 16-node storage manager system allowing a writing rate up to 2 GByte/s and a total capacity of 250 TBytes. Operation: The DAQ system has been successfully deployed to capture the first LHC collisions, with trigger rates typically in the range 1–11 kHz. The DAQ system also serviced global cosmics and commissioning data taking, where data were typically taken with a ~1 kHz cosmic trigger rate and a raw event size of ~500 kByte. Often an additional ~100 kHz of random triggers were mixed in, pre-scaled for storage, to stress-test the overall system. Operational procedures for DAQ shifters and on-call experts have been consolidated. Throughout 2009, the online cluster, the production online Oracle database, and the central Detector Control System (DCS) have been operational 24/7. A development and integration database has been ...

  6. DAQ

    CERN Multimedia

    J.A. Coarasa Perez

    Event Builder One of the key design features of CMS is the large Central Data Acquisition System, capable of bringing over 100 GB of data to the surface and building 100,000 events every second. This very large DAQ system is expected to give CMS a competitive advantage, since we can have a very flexible High Level Trigger running entirely on standard computer processors. The first stage of what will be the largest DAQ system in the world is now being commissioned at Point 5. While the detector has until now been read out by a small system called the mini-DAQ, the large central DAQ Event Builder has been put together and debugged over the last 4 months. During the month of September, the full system from FED (front-end connection to the detector readout) to Filter Unit is being commissioned, and we hope to use the central DAQ Event Builder for the Global Run at the end of September. The first batch of 400 computers arrived in mid-April. These computers became Readout Units (RUs), wit...

  7. DAQ

    CERN Multimedia

    F. Meijers

    2011-01-01

    The DAQ system (see Figure 2) consists of: – the full detector read-out of a total of 633 FEDs (front-end drivers). The FRL (front-end readout link) provides the common interface between the sub-detector-specific FEDs and the central DAQ; – 8 DAQ slices with a 100 GB/s event building capacity – corresponding to a nominal 2 kB per FRL at a Level-1 trigger rate of 100 kHz; – an event filter to run the HLT (High Level Trigger) comprising 720 PCs with two quad-core 2.6 GHz CPUs; – a 16-node storage manager system allowing a writing rate that exceeds 1 GB/s, with concurrent transfers to Tier 0 at the same rate, and a total storage capacity of 250 TB. It also forwards events to the online DQM (Data Quality Monitoring). Figure 2: The CMS DAQ system. The two-stage event builder assembles event fragments from typically eight front-ends located underground (USC) into one super-...

  8. DAQ

    CERN Multimedia

    F. Meijers and C. Schwick

    2010-01-01

    The DAQ system has been deployed for physics data taking as well as supporting global test and commissioning activities. In addition to 24/7 operations, activities addressing performance and functional improvements are ongoing. The DAQ system consists of the full detector readout, 8 DAQ slices with a 1 Tbit/s event building capacity, an event filter to run the HLT comprising 720 8-core PCs, and a 16-node storage manager system allowing up to 2 GByte/s writing rate and a total capacity of 250 TBytes. Operation The LHC delivered the highest luminosity in fills with 6–8 colliding bunches and reached peak luminosities of 1–2 × 10^29 cm^-2 s^-1. The DAQ was typically operating in those conditions with a ~15 kHz trigger rate, a raw event size of ~500 kByte, and a ~150 Hz recording of stream-A with a size of ~50 kB. The CPU load on the HLT was ~10%. Tests for Heavy-Ion operation Tests have been carried out to examine the situation for data-taking in the future Heavy Ion (HI) run. The high occupancy expected in HI run...

  9. DAQ

    CERN Multimedia

    F. Meijers

    2011-01-01

    Operation for the 2011 physics run For the 2011 run, the HLT farm has been extended with additional PCs comprising 288 system boards with two 6-core CPUs each. This brought the total HLT capacity from 5760 cores to 9216 cores and 18 TB of memory. It provides a capacity for HLT of about 100 ms/event (on a 2.7 GHz E5430 core) at 100 kHz L1 rate in pp collisions. All central DAQ nodes have been migrated to SLC5/64-bit kernel and 64-bit applications. The DAQ system has been deployed for pp physics data-taking in 2011 and performed with high efficiency (downtime for central DAQ was less than 1%). For pp physics data-taking, the DAQ was operating with a L1 trigger rate up to ~100 kHz and, typically, a raw event size of ~500 kB, and ~400 Hz recording of stream-A (which includes all physics triggers) with a size of ~250 kB after compression. The event size increases linearly with the pile-up, as expected. The CPU load on the HLT reached close to 100%, depending on L1 and HLT menus. By changing the L1 and HLT pre-...

  10. DAQ

    CERN Multimedia

    E. Meschi

    2013-01-01

    The File-based Filter Farm in the CMS DAQ MarkII The CMS DAQ system will be upgraded after LS1 in order to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for a future upgrade of the detector front-ends. The experiment parameters for the post-LS1 data taking remain similar to those of Run 1: a Level-1 aggregate rate of 100 kHz and an aggregate HLT output bandwidth of up to 2 GB/s. A moderate event-size increase is anticipated from increased pile-up and changes in the detector readout. For the output bandwidth, the figure of 2 GB/s is assumed. The original Filter Farm design was successfully operated in 2010–2013 and its efficiency and fault tolerance were brought to an excellent level. There are, however, a number of disadvantages in that design at the interface between the DAQ data flow and the High-Level Trigger that warrant a careful scrutiny in view of the deployment of DAQ2, after the LS1: The reduction of the number of RU bui...

  11. DAQ

    CERN Multimedia

    J. Hegeman

    2013-01-01

    The DAQ2 system for post-LS1 is a re-implementation of the central DAQ event data flow with the capability to read-out the majority of legacy back-end sub-detector electronics FEDs, as well as the new MicroTCA-based back-end electronics (see for example the previous (December 2012) issue of the CMS bulletin). A further upgrade in the DAQ and Trigger is the development of the new TCDS, outlined in the forthcoming Level-1 Trigger Upgrade TDR. The new TCDS (Trigger Control and Distribution System) Currently, CMS trigger control comprises three more-or-less separate systems. The Trigger Timing and Control (TTC) system distributes the L1A signals and synchronisation commands to all front-ends. The Trigger Throttling System (TTS) collects front-end readiness information and propagates those up to the central Trigger Control System (TCS). The TCS allows or vetoes Level-1 triggers from the Global Trigger (GT) based on the TTS state and on the trigger rules. These three systems will be combined in the new control ...

  12. DAQ

    CERN Multimedia

    F. Meijers

    2010-01-01

    The DAQ system consists of the full detector readout, 8 DAQ slices with a 1 Tbit/s event building capacity, an event filter to run the HLT comprising 720 8-core PCs, and a 16-node storage manager system allowing a writing rate up to 2 GByte/s and a total capacity of 250 TBytes. Operation Returning after the Christmas stop, the DAQ system serviced global cosmics and commissioning data taking. Typically data were taken with a ~1 kHz cosmic trigger rate and a raw event size of ~500 kByte. Often an additional ~100 kHz of random triggers were mixed in, pre-scaled for storage, to stress-test the overall system. The online cluster, the production online Oracle database, and the central Detector Control System (DCS) have been operational 24/7. Infrastructure Immediately after the Christmas break, the online data center was put into maximum heat production mode to stress the cooling infrastructure. The maximum heat load produced in the room was about 570 kW. It appeared that the current settings ...

  13. DAQ

    CERN Document Server

    A. Racz

    The CMS DAQ installation status The year 2005 was dedicated to the production and testing of the custom-made electronic boards and the procurement of the commercial items needed to operate the underground part of the Data Acquisition System of CMS. The first half of 2006 was spent installing the DAQ infrastructure in USC55 (dedicated cable trays in the false floor) and preparing the racks to receive the hardware elements. The second half of 2006 was dedicated to the installation of the CMS DAQ elements underground. As a quick reminder, the underground part of the Data Acquisition System performs two tasks: a) Front-End data collection and transmission to the online computing farm on the surface (SCX). b) Front-End status collection and elaboration of a smart back-pressure signal preventing the overflow of the Front-End electronics. The hardware elements installed to perform these two tasks are the following:     500 FRL cards receiving the data of one or two sender...

  14. EPICS based DAQ system

    International Nuclear Information System (INIS)

    Cheng Weixing; Chen Yongzhong; Zhou Weimin; Ye Kairong; Liu Dekang

    2002-01-01

    EPICS is the most popular development platform for building control systems and beam diagnostic systems in modern physics experiment facilities. An EPICS-based data acquisition system was built on the Red Hat 6.2 operating system. The system has been successfully used in beam position monitor mapping, where it considerably improves the mapping process.

  15. The BELLE DAQ system

    Science.gov (United States)

    Suzuki, Soh Yamagata; Yamauchi, Masanori; Nakao, Mikihiko; Itoh, Ryosuke; Fujii, Hirofumi

    2000-10-01

    We built a data acquisition system for the BELLE experiment. The system was designed to cope with the average trigger rate up to 500 Hz at the typical event size of 30 kB. This system has five components: (1) the readout sequence controller, (2) the FASTBUS-TDC readout systems using charge-to-time conversion, (3) the barrel shifter event builder, (4) the parallel online computing farm, and (5) the data transfer system to the mass storage. This system has been in operation for physics data taking since June 1999 without serious problems.

  16. The BELLE DAQ system

    International Nuclear Information System (INIS)

    Suzuki, Soh Yamagata; Yamauchi, Masanori; Nakao, Mikihiko; Itoh, Ryosuke; Fujii, Hirofumi

    2000-01-01

    We built a data acquisition system for the BELLE experiment. The system was designed to cope with the average trigger rate up to 500 Hz at the typical event size of 30 kB. This system has five components: (1) the readout sequence controller, (2) the FASTBUS-TDC readout systems using charge-to-time conversion, (3) the barrel shifter event builder, (4) the parallel online computing farm, and (5) the data transfer system to the mass storage. This system has been in operation for physics data taking since June 1999 without serious problems.

  17. A DAQ system for CAMAC controller CC/NET using DAQ-Middleware

    International Nuclear Information System (INIS)

    Inoue, E; Yasu, Y; Nakayoshi, K; Sendai, H

    2010-01-01

    DAQ-Middleware is a framework for building DAQ systems, based on RT-Middleware (Robot Technology Middleware). In recent years, DAQ-Middleware has come into use as one of the DAQ frameworks for next-generation particle physics experiments at KEK. DAQ-Middleware comprises DAQ-Components with all the necessary basic functions of a DAQ and is easily extensible. Using DAQ-Middleware, you can therefore easily construct your own DAQ system by combining these components. As an example, we have developed a DAQ system for CC/NET [1] using DAQ-Middleware, by the addition of a GUI part and a CAMAC readout part. CC/NET, a CAMAC controller, was developed to accomplish high-speed readout of CAMAC data. The basic design concept of CC/NET is to realize data taking through networks, so it is consistent with the DAQ-Middleware concept. We show how convenient it is to use DAQ-Middleware.

  18. DAQ

    CERN Multimedia

    Frans Meijers

    2012-01-01

    Operations for the 2012 physics run For the 2012 run, the DAQ system typically operates at the start of a fill with an L1 Trigger rate close to 90 kHz, a raw event size of ~700 kB, and ~1 kHz recording of stream-A with a size of ~450 kB after compression. Stream-A includes the physics triggers and has, since 2012, consisted of the “core” triggers and the “parked” triggers, at about equal rate. In order to be able to handle the higher instantaneous luminosities in 2012 (so far, up to 6.5E33 at 50 ns bunch spacing) with a pile-up of ~35 events, an extension of the HLT was installed and commissioned, and has been in operation since the start of data taking. Extension of the HLT farm The CMS event builder and High-Level Trigger (HLT) farm are built using standard commercial PCs and networking equipment and are therefore easily extendable with state-of-the-art hardware. The HLT farm has been extended twice so far, in May 2011 and recently in May 2012. Table 1 shows the parameters and...

  19. DAQ

    CERN Multimedia

    P. Schieferdecker

    ConfDB: CMS HLT Configuration Database The CMS High Level Trigger (HLT) is based on the CMSSW reconstruction framework and is therefore configured in much the same way as any offline or analysis job: by passing a document to the internal event processing machinery which is valid according to the CMSSW configuration grammar. For offline reconstruction or analysis, this document can be formatted as a text file or a Python script, either of which CMSSW can interpret to determine which specific software modules to load, which value to assign to each of their parameters, and in which succession to apply them to a given event. The configuration of the HLT is very complex: saving the most recent version of it into a single text file results in more than 8000 lines of instructions, amounting to more than 350 kB in size. As for any other subsystem of the CMS data acquisition system (DAQ), the record of the state of the HLT during data-taking must be meticulously kept and archived. It is crucial that several versions of a part...

  20. Physics Requirements for the ALICE DAQ system

    CERN Document Server

    Vande Vyvre, P

    2000-01-01

    Abstract The goal of this note is to review the requirements for the DAQ system originating from the various physics topics that will be studied by the ALICE experiment. It summarises all the current requirements, both for Pb-Pb and p-p interactions. The consequences in terms of throughput at different stages of the DAQ system are presented for different running scenarios.

  1. New methods to engineer and seamlessly reconfigure time triggered ethernet based systems during runtime based on the PROFINET IRT example

    CERN Document Server

    Wisniewski, Lukasz

    2017-01-01

    The objective of this dissertation is to design a concept that would allow increasing the flexibility of currently available Time Triggered Ethernet based (TTEB) systems without affecting their performance and robustness. The main challenges are related to the scheduling of time-triggered communication, which may take a significant amount of time and has to be performed on a powerful platform. Additionally, reliability has to be considered and kept at the required high level. Finally, the reconfiguration has to be done optimally, without affecting the currently running system.

  2. DAQ

    CERN Multimedia

    F. Meijers

    2012-01-01

    The DAQ operated efficiently for the remainder of the pp 2012 run, where the LHC reached a peak luminosity of 7.5E33 (at 50 ns bunch spacing). At the start of a fill, typical conditions are: an L1 trigger rate close to 90 kHz, a raw event size of ~700 kB, and ~1.5 kHz recording of stream-A with a size of ~500 kB after compression. The stream-A High Level Trigger (HLT) output includes the physics triggers and consists of the ‘core’ triggers and the ‘parked’ triggers, at about equal rate. Downtime due to central DAQ was below 1%. During the year, various improvements and enhancements have been implemented. An example is the introduction of the ‘action-matrix’ in run control. This matrix defines a small set of run modes, each linking a consistent set of sub-detector read-out configurations and L1 and HLT settings to an LHC mode. This mechanism facilitates operation, as it automatically proposes the run mode depending on the actual...

  3. Fault Tolerant Ethernet Based Network for Time Sensitive Applications in Electrical Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    Leos Bohac

    2013-01-01

    The paper analyses and experimentally verifies the deployment of Ethernet-based network technology to enable fault-tolerant and timely exchange of data among a number of high-voltage protective relays that use a proprietary serial communication line to exchange real-time data on the state of their high-voltage circuitry, facilitating fast protection switching in case of critical failures. The digital serial signal is first fed into a PCM multiplexer, where it is mapped to the corresponding E1 (2 Mbit/s) time-division multiplexed signal. Subsequently, the resulting E1 frames are packetized and sent through the Ethernet control LAN to the opposite PCM demultiplexer, where the reverse processing is done, finally delivering the signal to the opposite protective relay. The challenge of this setup is to assure very timely delivery of the control information between protective relays even in the case of failures of the Ethernet network itself. The tolerance of the Ethernet network to faults is assured using the widespread per-VLAN Rapid Spanning Tree Protocol, optionally extended by 1+1 PCM protection.
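
    The E1-over-Ethernet mapping described above amounts to slicing a constant-rate TDM stream into fixed-size packet payloads. A minimal sketch under simple assumptions (the frame grouping and function name are illustrative, not the paper's implementation):

```python
E1_FRAME_BYTES = 32        # an E1 frame is 32 timeslots of one byte each
FRAMES_PER_PACKET = 8      # assumed grouping: 8 frames -> 256-byte payload

def packetize_e1(stream: bytes, frames_per_packet: int = FRAMES_PER_PACKET):
    """Slice a stream of consecutive E1 frames into Ethernet-sized payloads.
    At 8000 E1 frames/s (2.048 Mbit/s), 8 frames per packet gives 1000
    packets/s, i.e. each packet carries 1 ms of the TDM signal and adds
    1 ms of packetization delay."""
    chunk = E1_FRAME_BYTES * frames_per_packet
    return [stream[i:i + chunk] for i in range(0, len(stream), chunk)]

payloads = packetize_e1(bytes(16 * E1_FRAME_BYTES))  # 16 frames in
print(len(payloads), len(payloads[0]))  # -> 2 256
```

    The grouping factor trades bandwidth efficiency against packetization delay, which matters here because protection switching is time-sensitive.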

  4. Belle DAQ system upgrade at 2001

    CERN Document Server

    Suzuki, S Y; Kim, H W; Kim, H J; Kim, H O; Nakao, M; Won, E; Yamauchi, M

    2002-01-01

    We renewed the data acquisition system for the Belle experiment. The previous data acquisition system, in use since December 1998, had no level-2 trigger facility. To improve the data reduction factor and total throughput, we replaced the event builder, the online computer farm and the storage system. The event builder and online computer farm are unified into one system. This event building farm uses commodity hardware and adds level-2 trigger functionality. The new data acquisition system has been in operation since last autumn and is very stable. We have taken 36 fb⁻¹ with the new DAQ system, already exceeding the 30 fb⁻¹ total of the previous DAQ system.

  5. DAQ

    CERN Multimedia

    Gomez-Reino Garrido

    Rack Control In order to operate and monitor the CMS detector, a large amount of electronic equipment is being installed in around five hundred racks. These racks, full of PCs and other industrial and custom electronic instruments, must be closely controlled and monitored on a full-time basis. For this purpose, CMS has developed a Rack Control & Monitoring software application that is also used by the rest of the LHC experiments. On the control side, this application interfaces with the electrical distribution system, allowing individual racks or groups of racks to be powered on and off. For the rack environment monitoring part, the rack control software communicates with CERN-made monitoring boards installed in every rack. These boards provide, among other information, temperature, humidity and air flow readings inside each rack. Some automated actions are performed by the tool to anticipate and, if possible, prevent safety system actions in the racks. Racks are automatically switched off if temperature or dew point r...

  6. The DoubleChooz DAQ systems.

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The Double Chooz (DC) reactor anti-neutrino experiment consists of a neutrino detector and a large-area Outer Veto detector. A custom data-acquisition (DAQ) system, written in the Ada language, was developed for all the sub-detectors in the neutrino detector, along with a generic object-oriented data acquisition system for the Outer Veto detector. Generic object-oriented programming was also used to support the readout of several electronics systems, providing a simple interface for any new electronics to be added, given a dedicated driver. The core electronics of the experiment is based on FADC electronics (500 MHz sampling rate), therefore a data-reduction scheme has been implemented to reduce the data volume per trigger. A dynamic data format was created to allow dynamic reduction of each trigger before data is written to disk. The decision is based on low-level information that determines the relevance of each trigger. The DAQ is structured internally into two types of processors: several read-out processors readi...

  7. IPbus A flexible Ethernet-based control system for xTCA hardware

    CERN Document Server

    Williams, Thomas Stephen

    2014-01-01

    The ATCA and uTCA standards include industry-standard data pathway technologies such as Gigabit Ethernet which can be used for control communication, but no specific hardware control protocol is defined. The IPbus suite of software and firmware implements a reliable high-performance control link for particle physics electronics, and has successfully replaced VME control in several large projects. In this paper, we outline the IPbus system architecture, and describe recent developments in the reliability, scalability and performance of IPbus systems, carried out in preparation for deployment of uTCA-based CMS upgrades before the LHC 2015 run. We also discuss plans for future development of the IPbus suite. SUMMARY: IPbus will be used for controlling the uTCA electronics in the CMS HCAL, TCDS, Pixel and Level-1 trigger upgrades. IPbus control has already been extensively used in the work of these upgrade projects so far, and final uTCA systems will be deployed in the experiment starting from Autumn 2014. IPbus is...
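
    On the wire, IPbus control reduces to packing 32-bit transaction words. A sketch of the IPbus 2.0 transaction-header layout as given in the public protocol specification (the field values and the endianness chosen here are assumptions for illustration; consult the specification before relying on them):

```python
import struct

def ipbus_txn_header(txn_id: int, words: int, type_id: int,
                     info_code: int = 0xF, version: int = 2) -> bytes:
    """Pack an IPbus 2.0 transaction header:
    [31:28] protocol version, [27:16] transaction ID,
    [15:8] word count, [7:4] type ID (0x0 read, 0x1 write),
    [3:0] info code (0xF = outbound request)."""
    word = ((version << 28) | (txn_id << 16) | (words << 8)
            | (type_id << 4) | info_code)
    return struct.pack("<I", word)  # little-endian assumed here

# Single-word read request, transaction ID 1
hdr = ipbus_txn_header(txn_id=1, words=1, type_id=0x0)
print(hdr.hex())  # -> 0f010120 (little-endian bytes of 0x2001010F)
```

    In practice one would use the uHAL library from the IPbus suite rather than hand-packing headers; the sketch only shows why a plain Ethernet/UDP link suffices as the transport.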

  8. FAIR DAQ system: Performances and global DAQ management

    International Nuclear Information System (INIS)

    Ordine, A.; Boiano, A.; Zaghi, A.

    1997-01-01

    We present an overview of the features of FAIR (FAst Inter-crate Readout), a novel "plug-n-play" trigger- and readout-oriented bus system. It provides an effective, low-cost, homogeneous, highly extendible and scalable front-end environment. Readout and event-building are performed at the same time, without the need for CPUs, by means of a transparent hardware-level protocol. The measured rate of data transfer and event-building can be as fast as 22 ns/longword (1.44 Gbit/s). The measured performances will be discussed. The "plug-n-play" feature will also be presented in some detail, along with the control system based on a network embedded in the bus

  9. FPGAs for next gen DAQ and Computing systems at CERN

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The need for FPGAs in DAQ is a given, but newer systems need to be designed to meet the substantial increase in data rate and the challenges it brings. FPGAs are also power-efficient computing devices, so the work also looks at accelerating HEP algorithms and at integrating FPGAs with CPUs, taking advantage of programming models like OpenCL. Other explorations involved using OpenCL to model a DAQ system.

  10. Development of a cost-effective and flexible vibration DAQ system for long-term continuous structural health monitoring

    Science.gov (United States)

    Nguyen, Theanh; Chan, Tommy H. T.; Thambiratnam, David P.; King, Les

    2015-12-01

    In the structural health monitoring (SHM) field, long-term continuous vibration-based monitoring is becoming increasingly popular as this could keep track of the health status of structures during their service lives. However, implementing such a system is not always feasible due to on-going conflicts between budget constraints and the need of sophisticated systems to monitor real-world structures under their demanding in-service conditions. To address this problem, this paper presents a comprehensive development of a cost-effective and flexible vibration DAQ system for long-term continuous SHM of a newly constructed institutional complex with a special focus on the main building. First, selections of sensor type and sensor positions are scrutinized to overcome adversities such as low-frequency and low-level vibration measurements. In order to economically tackle the sparse measurement problem, a cost-optimized Ethernet-based peripheral DAQ model is first adopted to form the system skeleton. A combination of a high-resolution timing coordination method based on the TCP/IP command communication medium and a periodic system resynchronization strategy is then proposed to synchronize data from multiple distributed DAQ units. The results of both experimental evaluations and experimental-numerical verifications show that the proposed DAQ system in general and the data synchronization solution in particular work well and they can provide a promising cost-effective and flexible alternative for use in real-world SHM projects. Finally, the paper demonstrates simple but effective ways to make use of the developed monitoring system for long-term continuous structural health evaluation as well as to use the instrumented building herein as a multi-purpose benchmark structure for studying not only practical SHM problems but also synchronization related issues.
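
    The TCP/IP-based timing coordination described above can be illustrated with the classic round-trip offset estimate: the coordinating node timestamps a command exchange with each DAQ unit and, assuming symmetric network delay, infers that unit's clock offset. A minimal sketch (names and the resynchronization threshold are illustrative, not the authors' implementation):

```python
def estimate_offset(t_send: float, t_remote: float, t_recv: float) -> float:
    """t_send/t_recv: coordinator clock when the command leaves/returns;
    t_remote: DAQ-unit clock when it handled the command.
    Assuming symmetric delay, the remote event sits at the round-trip
    midpoint, so the offset is the remote timestamp minus that midpoint."""
    return t_remote - (t_send + t_recv) / 2.0

# Example: round trip of 4 units, remote clock 5 units ahead
offset = estimate_offset(0.0, 7.0, 4.0)
print(offset)  # -> 5.0

def resync_needed(offset: float, tolerance: float = 1e-3) -> bool:
    """Periodic resynchronization: correct a unit once its estimated
    offset drifts beyond the tolerated bound."""
    return abs(offset) > tolerance
```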

  11. FASTBUS readout system for the CDF DAQ upgrade

    International Nuclear Information System (INIS)

    Andresen, J.; Areti, H.; Black, D.

    1993-11-01

    The Data Acquisition System (DAQ) at the Collider Detector at Fermilab is currently being upgraded to handle a minimum of 100 events/sec for an aggregate bandwidth of at least 25 Mbytes/sec. The DAQ system is based on a commercial switching network that has interfaces to the VME bus. The modules that read out the front-end crates (FASTBUS and RABBIT) have to deliver the data to the VME-bus-based host adapters of the switch. This paper describes a readout system that has the required bandwidth while keeping the experiment dead time due to the readout to a minimum

  12. Concepts and technologies used in contemporary DAQ systems

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    based trigger processor and event building farms. We have also seen a shift from standard or proprietary bus systems used in event building to Gigabit networks and commodity components, such as PCs. With the advances in processing power, network throughput, and storage technologies, today's data rates in large experiments routinely reach hundreds of Megabytes/s. We will present examples of contemporary DAQ systems from different experiments, try to identify or categorize new approaches, and will compare the performance and throughput of existing DAQ systems with the projected data rates of the LHC experiments to see how close we have come to accomplishing these goals. We will also tr...

  13. A DAQ system for pixel detectors R and D

    International Nuclear Information System (INIS)

    Battaglia, M.; Bisello, D.; Contarato, D.; Giubilato, P.; Pantano, D.; Tessaro, M.

    2009-01-01

    Pixel detector R and D for HEP and imaging applications requires an easily configurable and highly versatile DAQ system able to drive and read out many different chip designs in a transparent way, with different control logics and/or clock signals. An integrated, real-time data collection and analysis environment is essential for fast and reliable detector characterization. We present a DAQ system developed to fulfill these specific needs, able to handle multiple devices at the same time while providing a convenient, ROOT-based data display and online analysis environment.

  14. The Message Reporting System of the ATLAS DAQ System

    CERN Document Server

    Caprini, M; Kolos, S; 10th ICATPP Conference on Astroparticle, Particle, Space Physics, Detectors and Medical Physics Applications

    2008-01-01

    The Message Reporting System (MRS) in the ATLAS data acquisition system (DAQ) is a package of the Online Software which acts as glue between the various elements of the DAQ, the High Level Trigger (HLT) and the Detector Control System (DCS). The aim of the MRS is to provide a facility which allows all software components in ATLAS to report messages to other components of the distributed DAQ system. The processes requiring the MRS are, on the one hand, applications that report error conditions or information and, on the other hand, message processors that receive reported messages. A message reporting application can inject one or more messages into the MRS at any time. An application wishing to receive messages can subscribe to a message group according to defined criteria, and receives the messages that fulfill the subscription criteria as they are reported to the MRS. The receiver's message processing can range from simply logging the messages in a file or terminal to performing message analysis. The inter-process comm...

  15. Gated integrator PXI-DAQ system for Thomson scattering diagnostics

    Energy Technology Data Exchange (ETDEWEB)

    Patel, Kiran, E-mail: kkpatel@ipr.res.in; Pillai, Vishal; Singh, Neha; Thomas, Jinto; Kumar, Ajai

    2017-06-15

    A Gated Integrator (GI) PXI-based data acquisition (DAQ) system has been designed and developed to ease the acquisition of fast Thomson-scattered signals (∼50 ns pulse width). The DAQ system consists of in-house designed and developed GI modules and a PXI-1405 chassis with several PXI-DAQ modules. The performance of the developed system has been validated during the SST-1 campaigns. The dynamic range of the GI module depends on the integrating capacitor (C{sub i}), and the modules have been calibrated with 12 pF and 27 pF integrating capacitors. The GI-based data acquisition system provides sixty-four channels for simultaneous sampling, using eight PXI-based digitization modules with eight channels per module. Error estimation and functional tests of this unit were carried out using a standard source as well as the fast detectors used for the Thomson scattering diagnostics. A user-friendly Graphical User Interface (GUI) has been developed in LabVIEW on the Windows platform to control the system and acquire the Thomson scattering signal. The resulting DAQ system is robust, easy to operate and maintain, has low power consumption and a high dynamic range with very good sensitivity, and is cost-effective; it has been tested for the SST-1 Thomson scattering diagnostics.
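As the abstract notes, the GI dynamic range is set by the integrating capacitor; for an ideal gated integrator the relation is simply V = Q/C, sketched below (the charge values are illustrative, not the module's calibration data):

```python
def integrator_output(charge_pC: float, c_int_pF: float) -> float:
    """Output voltage of an ideal gated integrator: V = Q / C.

    With charge in picocoulombs and capacitance in picofarads the
    ratio comes out directly in volts (pC / pF = V).
    """
    return charge_pC / c_int_pF
```

For the same input charge, the larger 27 pF capacitor yields a smaller output swing than the 12 pF one, extending the usable input range at the cost of sensitivity.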

  16. SPHERE DAQ and off-line systems: implementation based on the qdpb system

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2003-01-01

    The design of the online data acquisition (DAQ) system for the SPHERE setup (LHE, JINR) is described. SPHERE DAQ is based on the qdpb (Data Processing with Branchpoints) system and on configurable representations of the experimental data and CAMAC hardware. The implementation of the DAQ and off-line program code, which depends on the SPHERE setup's hardware layout and experimental data contents, is explained, along with the software modules specific to this implementation

  17. Orthos, an alarm system for the ALICE DAQ operations

    Science.gov (United States)

    Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Denes, Ervin; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; von Haller, Barthelemy

    2012-12-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detector electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches), and controls hundreds of distributed hardware and software components interacting together. This paper presents Orthos, the alarm system used to detect, log, report, and follow up on abnormal situations on the DAQ machines at the experimental area. The main objective of this package is to integrate alarm detection and notification mechanisms with a full-featured issue tracker, in order to prioritize, assign, and fix system failures optimally. This tool relies on a database repository with a logic engine, SQL interfaces to inject or query metrics, and dynamic web pages for user interaction. We describe the system architecture, the technologies used for the implementation, and the integration with existing monitoring tools.

  18. Orthos, an alarm system for the ALICE DAQ operations

    International Nuclear Information System (INIS)

    Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; Von Haller, Barthelemy; Denes, Ervin

    2012-01-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detector electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches), and controls hundreds of distributed hardware and software components interacting together. This paper presents Orthos, the alarm system used to detect, log, report, and follow up on abnormal situations on the DAQ machines at the experimental area. The main objective of this package is to integrate alarm detection and notification mechanisms with a full-featured issue tracker, in order to prioritize, assign, and fix system failures optimally. This tool relies on a database repository with a logic engine, SQL interfaces to inject or query metrics, and dynamic web pages for user interaction. We describe the system architecture, the technologies used for the implementation, and the integration with existing monitoring tools.

  19. LHCb Silicon Tracker DAQ and DCS Online Systems

    CERN Multimedia

    Buechler, A; Rodriguez, P

    2009-01-01

    The LHCb experiment at the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland specializes in precision measurements of b-quark decays. The Silicon Tracker (ST) plays a crucial part in tracking the particle trajectories and consists of two silicon micro-strip detectors, the Tracker Turicensis upstream of the LHCb magnet and the Inner Tracker downstream. The radiation and the magnetic field represent new challenges for the implementation of a Detector Control System (DCS) and the data acquisition (DAQ). The DAQ has to deal with more than 270K analog readout channels, 2K readout chips and real-time DAQ at a rate of 1.1 MHz with data processing at TELL1 level. The TELL1 real-time algorithms for clustering thresholds and other computations run on dedicated FPGAs that implement 13K configurable parameters per board, in total 1.17 K parameters for the ST. After data processing the total throughput amounts to about 6.4 Gbytes per second from an input data rate of around 337 Gbytes per second. A finite state ma...

  20. DAQ system for high energy polarimeter at the LHE, JINR: implementation based on the qdpb (data processing with branchpoints) system

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2001-01-01

    The implementation of the online data acquisition (DAQ) system for the High Energy Polarimeter (HEP) at the LHE, JINR is described. The HEP DAQ is based on the qdpb system. Software modules specific to this implementation (HEP data and hardware dependent) are discussed

  1. DAQ system for low density plasma parameters measurement

    International Nuclear Information System (INIS)

    Joshi, Rashmi S.; Gupta, Suryakant B.

    2015-01-01

    In various cases where low-density plasmas (number density ranging from 1E4 to 1E6 cm⁻³) exist, for example in basic plasma studies or the LEO space environment, measurement of plasma parameters becomes very critical. Conventional cylindrical-tip Langmuir probes often result in unstable measurements in such low-density plasma. Due to its larger surface area, a spherical Langmuir probe is used to measure such low plasma densities. Applying a sweep voltage signal to the probe and measuring the current corresponding to each voltage gives the V-I characteristic of the plasma, which can be plotted on a digital storage oscilloscope. This plot is analyzed to calculate various plasma parameters. The aim of this paper is to measure plasma parameters using a spherical Langmuir probe and an indigenously developed DAQ system. The DAQ system consists of a Keithley source-meter and a host system connected by a GPIB interface. An online plasma parameter diagnostic system is developed for measuring the properties of non-thermal plasma in vacuum. An algorithm is developed on the LabVIEW platform. V-I characteristics of the plasma are plotted for different filament current values and different locations of the Langmuir probe with reference to the plasma source. V-I characteristics are also plotted for forward and reverse voltage sweeps generated programmatically from the source-meter. (author)
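As a generic illustration of the kind of analysis applied to such a V-I characteristic (not the authors' LabVIEW code), the electron temperature can be estimated from the slope of ln(I) versus V in the exponential region of the curve, since I_e ∝ exp(V / T_e) there:

```python
import math

def electron_temperature_eV(voltages, currents):
    """Estimate Te (in eV) from the exponential region of a Langmuir
    probe V-I characteristic: I_e ~ exp(V / Te), so Te is the inverse
    of the least-squares slope of ln(I) versus V."""
    xs = list(voltages)
    ys = [math.log(i) for i in currents]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return 1.0 / slope
```

In practice only the points between the floating potential and the plasma potential would be fed to such a fit.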

  2. Design of data transmission for a portable DAQ system

    International Nuclear Information System (INIS)

    Zhou Wenxiong; Nan Gangyang; Zhang Jianchuan; Wang Yanyu

    2014-01-01

    Field Programmable Gate Arrays (FPGA) combined with ARM (Advanced RISC Machines) processors are increasingly employed in portable data acquisition (DAQ) systems for nuclear experiments to reduce the system volume and achieve powerful, multifunctional capability. High-speed data transmission between the FPGA and the ARM is one of the most challenging issues for system implementation. In this paper, we propose a method to realize high-speed data transmission by using the FPGA to acquire massive data from the FEE (Front-end electronics) and send it to the ARM, while the ARM transmits the data to the remote computer through the TCP/IP protocol for later processing. This paper mainly introduces the interface design of the high-speed transmission method between the FPGA and the ARM, the transmission logic of the FPGA, and the program design of the ARM. Theoretical analysis shows that the maximum transmission speed between the FPGA and the ARM in this way can reach 50 MB/s. In a realistic nuclear physics experiment, this portable DAQ system achieved a 2.2 MB/s data acquisition speed. (authors)
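A minimal sketch of the ARM-side relay described above, with the FPGA driver abstracted as a `read_chunk` callable (the function name and end-of-stream convention are assumptions, not the authors' interface):

```python
import socket

def relay(read_chunk, host, port, chunk_size=4096):
    """ARM-side relay sketch: pull raw data blocks from the FPGA
    interface (read_chunk stands in for the hypothetical driver call)
    and stream them to the remote computer over a TCP connection.
    Returns the total number of bytes forwarded."""
    total = 0
    with socket.create_connection((host, port)) as sock:
        while True:
            block = read_chunk(chunk_size)
            if not block:          # empty read signals end of acquisition
                break
            sock.sendall(block)
            total += len(block)
    return total
```

The remote computer would accept the connection and write the stream to disk for later processing.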

  3. High Performance Gigabit Ethernet Switches for DAQ Systems

    CERN Document Server

    Barczyk, Artur

    2005-01-01

    Commercially available high-performance Gigabit Ethernet (GbE) switches are optimized mostly for Internet and standard LAN application traffic. DAQ systems, on the other hand, usually produce very specific traffic patterns, with e.g. deterministic arrival times. The industry-accepted "loss-less" limit of 99.999% frame delivery may still be unacceptably lossy for DAQ purposes, as e.g. in the case of the LHCb readout system. In addition, even switches passing this criterion under random traffic can show significantly higher loss rates when subject to our traffic pattern, mainly due to buffer memory limitations. We have evaluated the performance of several switches, ranging from "pizza-box" devices with 24 or 48 ports up to chassis-based core switches, in a test bed capable of emulating realistic traffic patterns as expected in the readout system of our experiment. The results obtained in our tests have been used to refine and parametrize our packet-level simulation of the complete LHCb readout network. In this paper we report on the...
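The point about the 99.999% figure can be made concrete with a one-line estimate (the 1 MHz fragment rate below is an illustrative assumption, not LHCb's exact number):

```python
def lost_frames_per_second(frame_rate_hz: float, delivery_ratio: float) -> float:
    """Expected frame losses per second for a switch with the given
    delivery ratio (e.g. the industry's 99.999% 'loss-less' figure)."""
    return frame_rate_hz * (1.0 - delivery_ratio)

# At an assumed 1 MHz fragment rate, a 99.999% switch still drops
# about 10 fragments every second - far too many for a readout
# system that must not lose event fragments.
```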

  4. A verilog simulation of the CDF DAQ system

    Energy Technology Data Exchange (ETDEWEB)

    Schurecht, K.; Harris, R. (Fermi National Accelerator Lab., Batavia, IL (United States)); Sinervo, P.; Grindley, R. (Toronto Univ., ON (Canada). Dept. of Physics)

    1991-11-01

    A behavioral simulation of the CDF data acquisition system was written in the Verilog modeling language in order to investigate the effects of various improvements to the existing system. This system is modeled as five separate components that communicate with each other via Fastbus interrupt messages. One component of the system, the CDF event builder, is modeled in substantially greater detail due to its complex structure. This simulation has been verified by comparing its performance with that of the existing DAQ system. Possible improvements to the existing systems were studied using the simulation, and the optimal upgrade path for the system was chosen on the basis of these studies. The overall throughput of the modified system is estimated to be double that of the existing setup. Details of this modeling effort will be discussed, including a comparison of the modeled and actual performance of the existing system.

  5. A verilog simulation of the CDF DAQ system

    International Nuclear Information System (INIS)

    Schurecht, K.; Harris, R.; Sinervo, P.; Grindley, R.

    1991-11-01

    A behavioral simulation of the CDF data acquisition system was written in the Verilog modeling language in order to investigate the effects of various improvements to the existing system. This system is modeled as five separate components that communicate with each other via Fastbus interrupt messages. One component of the system, the CDF event builder, is modeled in substantially greater detail due to its complex structure. This simulation has been verified by comparing its performance with that of the existing DAQ system. Possible improvements to the existing systems were studied using the simulation, and the optimal upgrade path for the system was chosen on the basis of these studies. The overall throughput of the modified system is estimated to be double that of the existing setup. Details of this modeling effort will be discussed, including a comparison of the modeled and actual performance of the existing system

  6. Configurable data and CAMAC hardware representations for implementation of the SPHERE DAQ and offline systems

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2001-01-01

    An implementation of a configurable representation of the experimental data for use in the DAQ and offline systems of the SPHERE setup at the LHE, JINR is described. A software scheme for the configurable description of the SPHERE CAMAC hardware, intended for the online data acquisition (DAQ) implementation based on the qdpb system, is presented

  7. Embedded DAQ System Design for Temperature and Humidity Measurement

    Directory of Open Access Journals (Sweden)

    Tarique Rafique Memon

    2016-05-01

    In this work, we have proposed a cost-effective DAQ (Data Acquisition) system design useful for local industries, built with the user-friendly LABVIEW (Laboratory Virtual Instrumentation Electronic Workbench). The proposed system can measure and control different industrial parameters, which can be presented in graphical icon format. The system is designed for 8 channels, and was tested and recorded for two parameters, i.e. temperature and RH (Relative Humidity). Both parameters are set with upper and lower limits and controlled using relays. The embedded system is developed using a standard microcontroller to acquire and process the analog data and pass it on for further processing over a serial interface to a PC running LABVIEW. The designed system is capable of monitoring and recording the corresponding linkage between temperature and humidity in industrial units, and indicates abnormalities within the process and controls those abnormalities through relays

  8. Embedded DAQ System Design for Temperature and Humidity Measurement

    International Nuclear Information System (INIS)

    Memon, T.R.

    2013-01-01

    In this work, we have proposed a cost-effective DAQ (Data Acquisition) system design useful for local industries, built with the user-friendly LABVIEW (Laboratory Virtual Instrumentation Electronic Workbench). The proposed system can measure and control different industrial parameters, which can be presented in graphical icon format. The system is designed for 8 channels, and was tested and recorded for two parameters, i.e. temperature and RH (Relative Humidity). Both parameters are set with upper and lower limits and controlled using relays. The embedded system is developed using a standard microcontroller to acquire and process the analog data and pass it on for further processing over a serial interface to a PC running LABVIEW. The designed system is capable of monitoring and recording the corresponding linkage between temperature and humidity in industrial units, and indicates abnormalities within the process and controls those abnormalities through relays. (author)
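The upper/lower-limit relay control described in both records above reduces to a simple threshold check; a hypothetical sketch (function name, return values and relay actions are assumptions, not the authors' firmware):

```python
def relay_command(value: float, low: float, high: float) -> str:
    """Decide the relay action for a measured temperature or RH value
    against the configured lower and upper limits."""
    if value > high:
        return "RELAY_ON"    # e.g. switch on cooling / dehumidifier
    if value < low:
        return "RELAY_OFF"   # e.g. switch it back off
    return "HOLD"            # within limits: leave the relay state alone
```

A real controller would typically add hysteresis around each limit to avoid relay chatter near the thresholds.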

  9. Overview and performance of the FNAL KTeV DAQ system

    International Nuclear Information System (INIS)

    Nakaya, T.; O'Dell, V.; Hazumi, M.; Yamanaka, T.

    1995-11-01

    KTeV is a new fixed-target experiment at Fermilab designed to study CP violation in the neutral kaon system. The KTeV Data Acquisition System (DAQ) is one of the highest-performance DAQ systems in the field of high energy physics. The sustained data throughput of the KTeV DAQ reaches 160 Mbytes/sec, and the available online level-3 processing power is 3600 MIPS. In order to handle such a high data throughput, the KTeV DAQ is designed around a memory matrix core where the data flow is divided and parallelized. In this paper, we present the architecture and test results of the KTeV DAQ system

  10. Development of multi-channel gated integrator and PXI-DAQ system for nuclear detector arrays

    International Nuclear Information System (INIS)

    Kong Jie; Su Hong; Chen Zhiqiang; Dong Chengfu; Qian Yi; Gao Shanshan; Zhou Chaoyang; Lu Wan; Ye Ruiping; Ma Junbing

    2010-01-01

    A multi-channel gated integrator and a PXI-based data acquisition system have been developed for nuclear detector arrays with hundreds of detector units. The multi-channel gated integrator can be controlled by a programmable GI controller. The PXI-DAQ system consists of an NI PXI-1033 chassis with several PXI-DAQ cards. The system software has a user-friendly GUI, written in C using LabWindows/CVI under the Windows XP operating system. The PXI-DAQ system is very reliable and capable of handling event rates up to 40 kHz.

  11. The TOTEM DAQ based on the Scalable Readout System (SRS)

    Science.gov (United States)

    Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio

    2018-02-01

    The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics program approved for the LHC's Run Two phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance as measured during the commissioning phase at the LHC Interaction Point.

  12. The DAQ system for the AEḡIS experiment

    Science.gov (United States)

    Prelz, F.; Aghion, S.; Amsler, C.; Ariga, T.; Bonomi, G.; Brusa, R. S.; Caccia, M.; Caravita, R.; Castelli, F.; Cerchiari, G.; Comparat, D.; Consolati, G.; Demetrio, A.; Di Noto, L.; Doser, M.; Ereditato, A.; Evans, C.; Ferragut, R.; Fesel, J.; Fontana, A.; Gerber, S.; Giammarchi, M.; Gligorova, A.; Guatieri, F.; Haider, S.; Hinterberger, A.; Holmestad, H.; Kellerbauer, A.; Krasnický, D.; Lagomarsino, V.; Lansonneur, P.; Lebrun, P.; Malbrunot, C.; Mariazzi, S.; Matveev, V.; Mazzotta, Z.; Müller, S. R.; Nebbia, G.; Nedelec, P.; Oberthaler, M.; Pacifico, N.; Pagano, D.; Penasa, L.; Petracek, V.; Prevedelli, M.; Ravelli, L.; Rienaecker, B.; Robert, J.; Røhne, O. M.; Rotondi, A.; Sacerdoti, M.; Sandaker, H.; Santoro, R.; Scampoli, P.; Simon, M.; Smestad, L.; Sorrentino, F.; Testera, G.; Tietje, I. C.; Widmann, E.; Yzombard, P.; Zimmer, C.; Zmeskal, J.; Zurlo, N.

    2017-10-01

    In the sociology of small- to mid-sized (O(100) collaborators) experiments, the issue of data collection and storage is sometimes treated as a residual problem for which well-established solutions are known. Still, the DAQ system can be one of the few forces that drive towards the integration of otherwise loosely coupled detector systems. As such, it may be hard to build from off-the-shelf components only. LabVIEW and ROOT are the (only) two software systems that were assumed to be familiar enough to all collaborators of the AEḡIS (AD6) experiment at CERN: starting from the GXML representation of LabVIEW data types, a semantically equivalent representation as ROOT TTrees was developed for permanent storage and analysis. All data in the experiment are cast into this common format and can be produced and consumed on both systems and transferred over TCP and/or multicast over UDP for immediate sharing over the experiment LAN. We describe the setup that has so far been able to cater to all run-data logging and long-term monitoring needs of the AEḡIS experiment.
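Streaming heterogeneous records over a TCP byte stream, as described above, requires some framing; a minimal length-prefixed sketch (this format is an assumption for illustration, not the AEḡIS wire format):

```python
import struct

def pack_record(payload: bytes) -> bytes:
    """Prefix a variable-size record with its 4-byte big-endian length
    so records can be concatenated back-to-back on a TCP stream."""
    return struct.pack("!I", len(payload)) + payload

def unpack_records(stream: bytes):
    """Split a received byte stream back into the individual records."""
    out, pos = [], 0
    while pos + 4 <= len(stream):
        (n,) = struct.unpack_from("!I", stream, pos)
        out.append(stream[pos + 4 : pos + 4 + n])
        pos += 4 + n
    return out
```

Each payload here would carry one serialized record in the experiment's common format.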

  13. A prototype DAQ system for the ALICE experiment based on SCI

    International Nuclear Information System (INIS)

    Skaali, B.; Ingebrigtsen, L.; Wormald, D.; Polovnikov, S.; Roehrig, H.

    1998-01-01

    A prototype DAQ system for the ALICE/PHOS beam test and commissioning program is presented. The system has been taking data since August 1997, and represents one of the first applications of the Scalable Coherent Interface (SCI) as the interconnect technology for an operational DAQ system. The front-end VMEbus address space is mapped directly into the DAQ computer's memory space through SCI via PCI-SCI bridges. The DAQ computer is a commodity PC running the Linux operating system. Measurements of data transfer rate and latency for the PCI-SCI bridges in a PC-VMEbus SCI configuration are presented. An optical SCI link based on the Motorola Optobus I data link is described

  14. Ethernet based data logger for gaseous detectors

    Science.gov (United States)

    Swain, S.; Sahu, P. K.; Sahu, S. K.

    2018-05-01

    A data logger has been designed to monitor and record ambient parameters such as temperature, pressure and relative humidity, along with gas flow rate, as functions of time. These parameters are required for understanding the characteristics of gas-filled detectors such as the Gas Electron Multiplier (GEM) and the Multi-Wire Proportional Counter (MWPC). The data logger uses several microcontrollers and has been interfaced to an Ethernet port, with a local LCD unit for displaying all measured parameters. In this article, we explain the design of the data logger and describe the hardware and software of the master microcontroller and the DAQ system, along with the LabVIEW client program. We have deployed this device with a GEM detector and present a few preliminary results as functions of the above parameters.

  15. Status of the Melbourne experimental particle physics DAQ, silicon hodoscope and readout systems

    International Nuclear Information System (INIS)

    Moorhead, G.F.

    1995-01-01

    This talk will present a brief review of the current status of the Melbourne Experimental Particle Physics group's primary data acquisition system (DAQ), the associated silicon hodoscope and trigger systems, and the tests currently underway and foreseen. Simulations of the propagation of ¹⁰⁶Ru β particles through the system will also be shown

  16. Data Acquisition (DAQ) system dedicated for remote sensing applications on Unmanned Aerial Vehicles (UAV)

    Science.gov (United States)

    Keleshis, C.; Ioannou, S.; Vrekoussis, M.; Levin, Z.; Lange, M. A.

    2014-08-01

    Continuous advances in unmanned aerial vehicles (UAV) and the increased complexity of their applications raise the demand for improved data acquisition systems (DAQ). These improvements may comprise low power consumption, low volume and weight, robustness, modularity and the capability to interface with various sensors and peripherals while maintaining high sampling rates and processing speeds. Such a system has been designed and developed and is currently integrated on the Autonomous Flying Platforms for Atmospheric and Earth Surface Observations (APAESO/NEA-YΠOΔOMH/NEKΠ/0308/09); however, it can be easily adapted to any UAV or any other mobile vehicle. The system consists of a single-board computer with a dual-core processor, rugged surface-mount memory and storage, analog and digital input-output ports, and many other peripherals that enhance its connectivity with various sensors, imagers and on-board devices. The system is powered by a high-efficiency power supply board. Additional boards such as frame grabbers, differential global positioning system (DGPS) satellite receivers, and general packet radio service (3G-4G-GPRS) modems for communication redundancy have been interfaced to the core system and are used whenever there is a mission need. The onboard DAQ system can be preprogrammed for automatic data acquisition, or it can be remotely operated during the flight from the ground control station (GCS) using a graphical user interface (GUI) which has been developed and is also presented in this paper. The unique design of the GUI and the DAQ system enables the synchronized acquisition of a variety of scientific and UAV flight data in a single core location. The new DAQ system and the GUI have been successfully utilized in several scientific UAV missions. In conclusion, the novel DAQ system provides the UAV and the remote-sensing community with a new tool capable of reliably acquiring, processing, storing and transmitting data from any sensor integrated

  17. DAQ system for testing RPC front-end electronics of the INO experiment

    International Nuclear Information System (INIS)

    Hari Prasad, K.; Sukhwani, Menka; Kesarkar, Tushar A.; Kumar, Sandeep; Chandratre, V.B.; Das, D.; Shinde, R.R.; Satyanarayana, B.

    2015-01-01

    The Resistive Plate Chamber (RPC) is the active detector element of the INO experiment. The in-house developed ANUSPARSH-III ASICs are used as the front-end electronics of the detector. The 2 m X 2 m RPC in use has 64 readout channels on the X-side and 64 readout channels on the Y-side. In order to test and validate the front-end electronics along with the RPC, a 64-channel DAQ system has been designed and developed. The detector parameters to be measured are noise rate, efficiency, hit pattern register and time resolution. The salient features of the DAQ system are: a 64-channel LVDS receiver in FPGA, FPGA-based parameter calculations, and a microcontroller for acquiring the processed data from the FPGAs and sending it through Ethernet and USB interfaces. The DAQ system consists of the following parts: two FPGAs, each receiving 32 LVDS channels, the FPGA firmware, the microcontroller firmware, an Ethernet interface, an embedded web server hosting the data analysis software, a USB interface, and LabWindows-based data analysis software. The DAQ system has been tested at TIFR with a 1 m X 1 m RPC

  18. Ethernet-based test stand for a CAN network

    Science.gov (United States)

    Ziebinski, Adam; Cupek, Rafal; Drewniak, Marek

    2017-11-01

    This paper presents a test stand for the CAN-based systems that are used in automotive applications. The authors propose an Ethernet-based test system that supports the virtualisation of a CAN network. The proposed solution has many advantages compared to classical test beds based on dedicated CAN-PC interfaces: it avoids the physical constraints on the number of interfaces that can be simultaneously connected to a tested system, which shortens the test time for parallel tests; the high speed of Ethernet transmission allows more frequent sampling of the messages transmitted on a CAN network (as the authors show in the experiment results section); and the cost of the proposed solution is much lower than that of traditional lab-based dedicated CAN interfaces for PCs.

  19. Prototype system tests of the Belle II PXD DAQ system

    Energy Technology Data Exchange (ETDEWEB)

    Fleischer, Soeren; Gessler, Thomas; Kuehn, Wolfgang; Lange, Jens Soeren; Muenchow, David; Spruck, Bjoern [II. Physikalisches Institut, Justus-Liebig-Universitaet Giessen (Germany); Liu, Zhen' An; Xu, Hao; Zhao, Jingzhou [Institute of High Energy Physics, Chinese Academy of Sciences (China); Collaboration: II PXD Collaboration

    2012-07-01

    The data acquisition system for the Belle II DEPFET Pixel Vertex Detector (PXD) is designed to cope with a high input data rate of up to 21.6 GB/s. The main hardware components will be AdvancedTCA-based Compute Nodes (CN) equipped with Xilinx Virtex-5 FX70T FPGAs. The design of the third Compute Node generation was completed recently. The xTCA-compliant system features a carrier board and 4 AMC daughter boards. First test results of a prototype board will be presented, including tests of (a) the high-speed optical links used for data input, (b) the two 2 GB DDR2 chips on the board and (c) the output of data via Ethernet, using UDP and TCP/IP with both hardware and software protocol stacks.

  20. Implementation of KoHLT-EB DAQ System using compact RIO with EPICS

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Dae-Sik; Kim, Suk-Kwon; Lee, Dong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Cho, Seungyon [National Fusion Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    EPICS (Experimental Physics and Industrial Control System) is a collaboratively developed collection of software tools that can be integrated to provide a comprehensive and scalable control system. There is currently an increase in the use of such systems in large physics experiments like KSTAR, ITER and DAIC (Daejeon Accelerator Ion Complex). The Korean heat load test facility (KoHLT-EB) was installed at KAERI. This facility is utilized for qualification tests of the plasma-facing components (PFC) for the ITER first wall and the DEMO divertor, and for thermo-hydraulic experiments. The previous data acquisition device was an Agilent 34980A multifunction switch and measurement unit controlled by Agilent VEE. In the present paper, we report on the newly upgraded, EPICS-based KoHLT-EB DAQ system, an advanced data acquisition system using FPGA-based reconfigurable DAQ devices such as CompactRIO. The operator interface of the KoHLT-EB DAQ system is built with Control System Studio (CSS); a separate server archives the related data using the standalone archive tool, and the Archive Viewer can retrieve the data at any time within the internal network.

  1. Upgrade of the TOTEM DAQ using the Scalable Readout System (SRS)

    International Nuclear Information System (INIS)

    Quinto, M; Cafagna, F; Fiergolski, A; Radicioni, E

    2013-01-01

    The main goals of the TOTEM experiment at the LHC are the measurement of the elastic and total p-p cross sections and the study of diffractive dissociation processes. At the LHC, collisions are produced at a rate of 40 MHz, imposing strong requirements on the Data Acquisition System (DAQ) in terms of trigger rate and data throughput. The TOTEM DAQ adopts a modular approach that, in standalone mode, is based on the VME bus. The VME-based Front End Driver (FED) modules host mezzanines that receive data through optical fibres directly from the detectors. After data checks and formatting are applied in the mezzanine, the data are retransmitted to the VME interface and to another mezzanine card plugged into the FED module. The maximum bandwidth of the VME bus limits the first-level trigger (L1A) rate to 1 kHz. In order to remove the VME bottleneck and improve the scalability and overall capabilities of the DAQ, a new system was designed and built based on the Scalable Readout System (SRS), developed in the framework of the RD51 Collaboration. The project aims to increase the efficiency of the current readout system by providing higher bandwidth and increased data filtering, implementing a second-level trigger event selection based on hardware pattern-recognition algorithms. This goal is to be achieved while preserving maximum backward compatibility with the LHC Timing, Trigger and Control (TTC) system as well as with the CMS DAQ. The results obtained and the perspectives of the project are reported. In particular, we describe the system architecture and the new Opto-FEC adapter card developed to connect the SRS with the FED mezzanine modules. A first test bench was built and validated during the last TOTEM data-taking period (February 2013). Readout of a set of 3 TOTEM Roman Pot silicon detectors was carried out to verify performance in the real LHC environment. In addition, the test allowed a check of data consistency and quality

  2. Development of BPM/BLM DAQ System for KOMAC Beam Line

    Energy Technology Data Exchange (ETDEWEB)

    Song, Young-Gi; Kim, Jae-Ha; Yun, Sang-Pil; Kim, Han-Sung; Kwon, Hyeok-Jung; Cho, Yong-Sub [Korea Atomic Energy Research Institute, Gyeongju (Korea, Republic of)

    2016-10-15

    The proton beam is accelerated from 3 MeV to 100 MeV through 11 DTL tanks. The KOMAC installed 10 beam lines, 5 for 20-MeV beams and 5 for 100-MeV beams. The proton beam is transmitted to two target rooms. The KOMAC has been operating two beam lines, one for 20 MeV and one for 100 MeV. A new beam line, the RI beam line, is under commissioning. A data acquisition (DAQ) system is essential to monitor beam signals from the analog front-end circuitry of the BPMs and BLMs at the beam lines. The DAQ digitizes the beam signal, and the sampling is synchronized with a reference signal which serves as an external trigger for beam operation. The digitized data are accessible to the Experimental Physics and Industrial Control System (EPICS)-based control system, which manages the whole accelerator control. The beam monitoring system integrates the BLM and BPM signals into the control system and offers real-time data to operators. The IOC, which is implemented with Linux and a PCI driver, supports data acquisition as a very flexible solution.
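Trigger-synchronized sampling of the kind described above can be sketched as follows; the class, units and values are illustrative, not the actual KOMAC IOC code:

```python
class TriggeredDAQ:
    """Toy sketch of trigger-synchronized sampling: every digitized
    sample is tagged with its offset (microseconds here) from the
    most recent reference trigger. Units and values are illustrative."""

    def __init__(self):
        self.last_trigger_us = None
        self.records = []

    def trigger(self, t_us):
        # Reference signal: external trigger marking the beam pulse start.
        self.last_trigger_us = t_us

    def sample(self, t_us, value):
        if self.last_trigger_us is None:
            return  # discard samples seen before any trigger
        self.records.append((t_us - self.last_trigger_us, value))

daq = TriggeredDAQ()
daq.sample(990, 0.1)   # ignored: no trigger yet
daq.trigger(1000)
daq.sample(1001, 0.5)
daq.sample(1002, 0.7)
print(daq.records)  # [(1, 0.5), (2, 0.7)]
```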

  3. The DAQ system of OPERA experiment and its specifications for the spectrometers

    International Nuclear Information System (INIS)

    Dusini, S.; Barichello, G.; Dal Corso, F.; Felici, G.; Lindozzi, M.; Stalio, S.; Sorrentino, G.

    2004-01-01

    We present an overview of the data acquisition system (DAQ) and event building of OPERA. OPERA is a long-baseline neutrino experiment with a highly modular detector and a low event rate. To deal with these features, a distributed DAQ system based on Ethernet standards for the data transfer has been chosen. A distributed GPS clock signal is used for synchronization and time-stamping of the data. This architecture allows a very modular and flexible event building based on a software trigger strategy. We also present its specific application to the spectrometer sub-detector where RPC trackers are installed. Self-triggering capability is a dedicated feature, included to remain sensitive to out-of-spill events and to possibly allow data taking before the official start of the experiment
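A time-window software trigger of the kind described above can be sketched as follows; the gap threshold and hit format are illustrative, not OPERA's actual parameters:

```python
def build_events(hits, gap_ns=500):
    """Group time-sorted (timestamp, channel) hits into events:
    a new event starts whenever the gap to the previous hit
    exceeds gap_ns. A simple software-trigger sketch relying on
    globally time-stamped data, as the GPS clock distribution allows."""
    events, current = [], []
    for ts, ch in sorted(hits):
        if current and ts - current[-1][0] > gap_ns:
            events.append(current)
            current = []
        current.append((ts, ch))
    if current:
        events.append(current)
    return events

hits = [(100, 1), (120, 2), (900, 3), (910, 1)]
print(build_events(hits))  # [[(100, 1), (120, 2)], [(900, 3), (910, 1)]]
```

Because the grouping is done in software on time-stamped data, the same stream can serve both in-spill and out-of-spill triggering, as the abstract notes.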

  4. Web-based DAQ systems: connecting the user and electronics front-ends

    Science.gov (United States)

    Lenzi, Thomas

    2016-12-01

    Web technologies are quickly evolving and are gaining in computational power and flexibility, allowing for a paradigm shift in the field of Data Acquisition (DAQ) systems design. Modern web browsers offer the possibility to create intricate user interfaces and are able to process and render complex data. Furthermore, new web standards such as WebSockets allow for fast real-time communication between the server and the user with minimal overhead. Those improvements make it possible to move the control and monitoring operations from the back-end servers directly to the user and to the front-end electronics, thus reducing the complexity of the data acquisition chain. Moreover, web-based DAQ systems offer greater flexibility, accessibility, and maintainability on the user side than traditional applications which often lack portability and ease of use. As proof of concept, we implemented a simplified DAQ system on a mid-range Spartan6 Field Programmable Gate Array (FPGA) development board coupled to a digital front-end readout chip. The system is connected to the Internet and can be accessed from any web browser. It is composed of custom code to control the front-end readout and of a dual soft-core Microblaze processor to communicate with the client.
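As a rough stdlib-only stand-in for the server side of such a system (the paper itself uses WebSockets; this sketch swaps in plain HTTP, and the status fields are illustrative), monitoring data can be served directly to a browser:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

STATUS = {"run": 1234, "trigger_rate_hz": 612}  # illustrative monitoring data

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the current DAQ status as JSON, readable by any browser.
        body = json.dumps(STATUS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

def fetch_status_once():
    # Serve exactly one request on an OS-assigned port, then fetch it.
    server = HTTPServer(("127.0.0.1", 0), StatusHandler)
    threading.Thread(target=server.handle_request, daemon=True).start()
    port = server.server_address[1]
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/status") as r:
        return json.loads(r.read())

print(fetch_status_once())  # {'run': 1234, 'trigger_rate_hz': 612}
```

A WebSocket channel, as in the paper, replaces this request/response cycle with a persistent connection so the server can push updates with minimal overhead.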

  5. Web-based DAQ systems: connecting the user and electronics front-ends

    International Nuclear Information System (INIS)

    Lenzi, Thomas

    2016-01-01

    Web technologies are quickly evolving and are gaining in computational power and flexibility, allowing for a paradigm shift in the field of Data Acquisition (DAQ) systems design. Modern web browsers offer the possibility to create intricate user interfaces and are able to process and render complex data. Furthermore, new web standards such as WebSockets allow for fast real-time communication between the server and the user with minimal overhead. Those improvements make it possible to move the control and monitoring operations from the back-end servers directly to the user and to the front-end electronics, thus reducing the complexity of the data acquisition chain. Moreover, web-based DAQ systems offer greater flexibility, accessibility, and maintainability on the user side than traditional applications which often lack portability and ease of use. As proof of concept, we implemented a simplified DAQ system on a mid-range Spartan6 Field Programmable Gate Array (FPGA) development board coupled to a digital front-end readout chip. The system is connected to the Internet and can be accessed from any web browser. It is composed of custom code to control the front-end readout and of a dual soft-core Microblaze processor to communicate with the client.

  6. DATA ACQUISITION (DAQ)

    CERN Multimedia

    Frans Meijers

    The installation of the 50 kHz DAQ/HLT system was completed during 2008. The equipment consists of the full detector readout, 8 DAQ slices with a 1 Tbit/s event building capacity, an event filter to run the High Level Trigger (HLT) comprising 720 8-core PCs, and a 16-node storage manager system allowing a write throughput of up to 2 GByte/s and a total capacity of 300 TByte. The 50 kHz DAQ system has been commissioned and put into service for global cosmics and commissioning data taking. During CRAFT, data were taken with the full detector at a ~600 Hz cosmic trigger rate, often with an additional 20 kHz of random triggers mixed in, which were pre-scaled for storage. The random rate has been increased to ~90 kHz for the commissioning and cosmics runs in 2009, which included all detectors except the tracker. The DAQ system is used, in addition to global data taking, for further commissioning and testing of the central DAQ. To this end data emulators are used at the front-end of the central DAQ (in...

  7. The LHCb RICH Upgrade: Development of the DCS and DAQ system.

    CERN Multimedia

    Cavallero, Giovanni

    2018-01-01

    The LHCb experiment is preparing for an upgrade during the second LHC long shutdown in 2019-2020. In order to fully exploit the LHC flavour physics potential with a five-fold increase in instantaneous luminosity, a trigger-less readout will be implemented. The RICH detectors will require new photon detectors and brand-new front-end electronics. The status of the integration of the RICH photon detector modules with the MiniDAQ, the prototype of the upgraded LHCb readout architecture, is reported. The development of the prototype of the RICH Upgrade Experiment Control System, integrating the DCS and DAQ partitions in a single FSM, is described, as is the status of the RICH Upgrade Inventory, Bookkeeping and Connectivity database.

  8. The New CMS DAQ System for Run 2 of the LHC

    CERN Document Server

    AUTHOR|(CDS)2087644; Behrens, Ulf; Branson, James; Chaze, Olivier; Cittolin, Sergio; Darlea, Georgiana Lavinia; Deldicque, Christian; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Forrest, Andrew Kevin; Gigi, Dominique; Glege, Frank; Gomez Ceballos, Guillelmo; Gomez-Reino Garrido, Robert; Hegeman, Jeroen Guido; Holzner, Andre Georg; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; Vivian O'Dell; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrucci, Andrea; Pieri, Marco; Racz, Attila; Sakulin, Hannes; Schwick, Christoph; Stieger, Benjamin Bastian; Sumorok, Konstanty; Veverka, Jan; Zejdl, Petr

    2015-01-01

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold. Firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a micro-TCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation...

  9. Application of the ATLAS DAQ and Monitoring System for MDT and RPC Commissioning

    CERN Document Server

    Pasqualucci, E

    2007-01-01

    The ATLAS DAQ and monitoring software are now commonly used to test detectors during the commissioning phase. In this paper, their usage in MDT and RPC commissioning is described, both at the surface pre-commissioning and commissioning stations and in the ATLAS pit. Two main components are heavily used for detector tests. The ROD Crate DAQ software is based on the ATLAS Readout application; through its plug-in mechanism, it provides a complete environment to interface any kind of detector or trigger electronics to the ATLAS DAQ system. All the available flavours of this application are used to test and run the MDT and RPC detectors at the pre-commissioning and commissioning sites. Ad-hoc plug-ins have been developed to implement data readout via VME, both with ROD prototypes and by emulating final electronics to read out data with temporary solutions, and to provide trigger distribution and busy management in a multi-crate environment. Data-driven event building functionality is also used to combine data f...

  10. Development of DAQ-Middleware

    International Nuclear Information System (INIS)

    Yasu, Y; Nakayoshi, K; Sendai, H; Inoue, E; Tanaka, M; Suzuki, S; Satoh, S; Muto, S; Otomo, T; Nakatani, T; Uchida, T; Ando, N; Kotoku, T; Hirano, S

    2010-01-01

    DAQ-Middleware is a software framework for network-distributed DAQ systems based on Robot Technology Middleware, an international standard of the Object Management Group (OMG) in robotics whose implementation was developed by AIST. A DAQ-Component is the software unit of DAQ-Middleware. Basic components have already been developed: for example, Gatherer is a readout component, Logger is a data-logging component, Monitor is an analysis component, and Dispatcher connects to Gatherer as the input of the data path and to Logger/Monitor as the output. The DAQ operator is a special component which controls the other components via the control/status path. The control/status path and data path, as well as the XML-based system configuration and the XML/HTTP-based system interface, are well defined in the DAQ-Middleware framework. DAQ-Middleware was adopted by experiments at J-PARC, and commissioning with the first beam was successfully carried out. The functionality of DAQ-Middleware and its status at J-PARC are presented.
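The component wiring described above can be sketched with in-process queues; the component names follow the abstract, but the code is a toy simplification, not the RT-Middleware implementation:

```python
from queue import Queue

# Toy versions of the DAQ-Middleware data-path components:
# Gatherer reads out data, Dispatcher fans it out to Logger and Monitor.

def gatherer(n):
    # Stand-in for detector readout: produce n data words.
    return [{"seq": i, "adc": 100 + i} for i in range(n)]

def dispatcher(data, sinks):
    # Fan each datum out to every downstream component queue.
    for d in data:
        for q in sinks:
            q.put(d)

logger_q, monitor_q = Queue(), Queue()
dispatcher(gatherer(3), [logger_q, monitor_q])

logged = [logger_q.get() for _ in range(3)]
print([d["adc"] for d in logged])  # [100, 101, 102]
```

In the real framework these components are separate processes connected over the network, with the DAQ operator driving them through the control/status path.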

  11. Effective diagnostic DAQ systems to reduce unnecessary data in KSTAR

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Taegu, E-mail: glory@nfri.re.kr; Lee, Woongryol; Hong, Jaesic; Park, Kaprai

    2016-11-15

    Highlights: • When plasma shots do not run successfully for the intended target time, the diagnostic systems continue to record unusable data, contributing to the growing data size. • To overcome this problem, some of KSTAR's libraries were upgraded to monitor the plasma status in real time. • With the real-time information on plasma status, some of the KSTAR diagnostic systems stop acquiring unnecessary data. • We were able to avoid storing approximately 698 GB of unusable data in the 7th KSTAR campaign. • This proved a very effective way to store only useful data, and it helped analysts after each shot. - Abstract: The plasma status of the Korea Superconducting Tokamak Advanced Research (KSTAR) device is measured by various diagnostic systems. The measured data size has been increasing every year due to increasing plasma pulse lengths, higher diagnostics operating frequencies, the addition of new diagnostic systems, and an increasing number of diagnostic channels. At times, when plasma shots do not run successfully for the intended target time, the diagnostic systems continue to record unusable data, contributing to the growing data size. In addition, the analysis time is affected, as these data need to be separated from the relevant data set. To overcome this problem, KSTAR's Standard Framework (SFW), Real Time Monitoring (RTMON), and Pulse Automation and Scheduling System (PASS) were upgraded to monitor the plasma status in real time. When the plasma current is less than 200 kA, RTMON sends the plasma status information every second to the SFW via EPICS Channel Access. With this real-time information on plasma status, some of the KSTAR diagnostic systems stop acquiring unnecessary data. This paper describes a method for reducing the storage of unnecessary data and its results in the 7th KSTAR campaign.
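The gating idea, discarding diagnostic data once the plasma current falls below the 200 kA threshold, can be sketched as follows (the sample format is illustrative, only the threshold comes from the abstract):

```python
IP_THRESHOLD_KA = 200  # plasma-current threshold quoted in the abstract

def filter_samples(samples):
    """Keep only diagnostic samples taken while the plasma current
    is at or above threshold; below it, the shot is effectively over
    and further acquisition would only store unusable data."""
    return [s for s in samples if s["ip_ka"] >= IP_THRESHOLD_KA]

samples = [
    {"t": 0.0, "ip_ka": 450, "value": 1.2},
    {"t": 0.1, "ip_ka": 310, "value": 1.1},
    {"t": 0.2, "ip_ka": 120, "value": 0.3},  # plasma already gone
]
print(len(filter_samples(samples)))  # 2
```

In KSTAR this decision is made live, with RTMON broadcasting the plasma status over EPICS Channel Access so that diagnostic systems can stop acquiring, rather than filtering after the fact as this sketch does.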

  12. Full system test of module to DAQ for ATLAS IBL

    Energy Technology Data Exchange (ETDEWEB)

    Behpour, Rouhina; Mattig, Peter; Wensing, Marius [Wuppertal University (Germany); Bindi, Marcello [Goettingen University (Germany)

    2015-07-01

    The IBL (Insertable B-Layer), the innermost layer of the ATLAS detector at the LHC, was successfully integrated into the system in June 2014. IBL system reliability and consistency are under investigation during ongoing milestone runs at CERN. The Back of Crate card (BOC) and Readout Driver (ROD), two of the main electronics cards, act as the interface between the IBL modules and the TDAQ chain. The detector data are received, processed and then formatted through the interplay of these two cards. The BOC takes advantage of an S-Link implementation inside its main FPGAs. The S-Link protocol, a standard high-performance data acquisition link between the readout electronics and the TDAQ system, was developed and is used at CERN: detector-formatted data are transferred through optical fibres to the ROS (Readout System) PCs, where they are buffered in ROBIN (Readout Buffer) cards. This talk presents results confirming stable and good performance of the system, from the modules to the readout electronics and on to the ROS PCs via S-Link.

  13. The trigger and DAQ systems of the NA59 experiment

    CERN Document Server

    Ünel, Gokhan; Ballestrero, Sergio

    2004-01-01

    The NA59 experiment on the CERN SPS-H2 beam-line took data during the summers of 1999 and 2000 to perform intercalibration studies of polarization measurement and to test the use of an aligned crystal as a quarter-wave plate. The analysis revealed a proof of concept for the birefringence property of aligned crystals for photons in the 30-170 GeV energy range. The 90-m-long detector for this fixed-target experiment had two independent readout schemes: one for more than 120 time-to-digital and analog-to-digital converter channels to obtain tracking and energy information; and another for the readout of the silicon strip detectors to improve vertex resolution. The readout electronics of the NA59 experiment was based on VMEbus and CAMAC systems. Novel data acquisition and online monitoring software were written to work on commodity hardware (PCs) running mainly the Linux operating system. 21 Refs.

  14. Applications of an OO (Objected Oriented) methodology and case to a DAQ system

    International Nuclear Information System (INIS)

    Bee, C.P.; Eshghi, S.; Jones, R.

    1996-01-01

    The RD13 project has evaluated the use of the Object Oriented Information Engineering (OOIE) method during the development of several software components connected to the DAQ system. The method is supported by a sophisticated commercial CASE tool (Object Management Workbench) and programming environment (Kappa) which covers the full life-cycle of the software, including model simulation, code generation and application deployment. This paper gives an overview of the method, the CASE tool and the DAQ components which have been developed, and we relate our experiences with the method and tool, its integration into our development environment and the spiral life cycle it supports. (author)

  15. Performance and scalability of the back-end sub-system in the ATLAS DAQ/EF prototype

    CERN Document Server

    Alexandrov, I N; Badescu, E; Burckhart, Doris; Caprini, M; Cohen, L; Duval, P Y; Hart, R; Jones, R; Kazarov, A; Kolos, S; Kotov, V; Laugier, D; Mapelli, Livio P; Moneta, L; Qian, Z; Radu, A A; Ribeiro, C A; Roumiantsev, V; Ryabov, Yu; Schweiger, D; Soloviev, I V

    2000-01-01

    The DAQ group of the future ATLAS experiment has developed a prototype system based on the trigger/DAQ architecture described in the ATLAS Technical Proposal to support studies of the full system functionality and architecture, as well as of available hardware and software technologies. One sub-system of this prototype is the back-end, which encompasses the software needed to configure, control and monitor the DAQ but excludes the processing and transportation of physics data. The back-end consists of a number of components including the run control, configuration databases and message reporting system. The software has been developed using standard, external software technologies such as OO databases and CORBA. It has been ported to several C++ compilers and operating systems, including Solaris, Linux, WNT and LynxOS. This paper gives an overview of the back-end software, its performance, scalability and current status. (17 refs).

  16. DAQ application of PC oscilloscope for chaos fiber-optic fence system based on LabVIEW

    Science.gov (United States)

    Lu, Manman; Fang, Nian; Wang, Lutang; Huang, Zhaoming; Sun, Xiaofei

    2011-12-01

    In order to obtain simultaneously a high sample rate and a large buffer in data acquisition (DAQ) for a chaos fiber-optic fence system, we developed a two-channel high-speed DAQ application for a PicoScope 5203 digital oscilloscope based on LabVIEW. We accomplished this by creating Call Library Function (CLF) nodes to call the DAQ functions in the two dynamic link libraries (DLLs), PS5000.dll and PS5000wrap.dll, provided by Pico Technology. The maximum real-time sample rate of the DAQ application reaches 1 GS/s. We can control the resolution of the application in sample time and data amplitude by changing their units in the block diagram, and also control the start and end times of the sampling operations. The experimental results show that the application has a sample rate high enough and a buffer large enough to meet the demanding DAQ requirements of the chaos fiber-optic fence system.

  17. A potent approach for the development of FPGA based DAQ system for HEP experiments

    Science.gov (United States)

    Khan, Shuaib Ahmad; Mitra, Jubin; David, Erno; Kiss, Tivadar; Nayak, Tapan Kumar

    2017-10-01

    With ever increasing particle beam energies and interaction rates in modern High Energy Physics (HEP) experiments at present and future accelerator facilities, there has always been a demand for robust Data Acquisition (DAQ) schemes which perform in harsh radiation environments and handle high data volumes. The scheme is required to be flexible enough to adapt to the demands of future detector and electronics upgrades, while at the same time keeping the cost factor in mind. To address these challenges, in the present work we discuss an efficient DAQ scheme for error-resilient, high-speed data communication on commercially available state-of-the-art FPGAs with optical links. The scheme utilises the GigaBit Transceiver (GBT) protocol to establish a radiation-tolerant communication link between the on-detector front-end electronics, situated in the harsh radiation environment, and the back-end Data Processing Unit (DPU), placed in a low radiation zone. The acquired data are reconstructed in the DPU, which reduces the data volume significantly, and then transmitted to the computing farms through high-speed optical links using 10 Gigabit Ethernet (10GbE). In this study, we focus on the implementation and testing of the GBT protocol and 10GbE links on an Intel FPGA. Results of measurements of resource utilisation, critical path delays, signal integrity, eye diagrams and Bit Error Rate (BER) are presented, which are the indicators of efficient system performance.
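The BER measurement mentioned above reduces to counting differing bits between transmitted and received patterns; a software sketch (real link tests drive PRBS patterns through the transceiver hardware rather than comparing buffers like this):

```python
def bit_errors(sent: bytes, received: bytes) -> int:
    # XOR corresponding bytes and count the differing (set) bits.
    return sum(bin(a ^ b).count("1") for a, b in zip(sent, received))

def bit_error_rate(sent: bytes, received: bytes) -> float:
    return bit_errors(sent, received) / (8 * len(sent))

# Illustrative pattern and error injection.
sent = bytes([0b10101010] * 1000)
received = bytearray(sent)
received[0] ^= 0b00000001  # inject a single bit flip
print(bit_error_rate(sent, bytes(received)))  # 0.000125
```

Quoting a meaningful BER for a multi-gigabit link requires transferring enough bits that the statistical uncertainty on the error count is small, which is why such tests run for long periods.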

  18. Development of the Calibrator of Reactivity Meter Using PC-Based DAQ System

    International Nuclear Information System (INIS)

    Edison; Mariatmo, A.; Sujarwono

    2007-01-01

    A reactivity meter calibrator has been developed using a PC-based DAQ system programmed in LabVIEW. The output of the calibrator is a voltage proportional to the neutron density n(t) corresponding to a step reactivity change ρ_0. The “Kalibrator meter reactivitas.vi” program calculates the seven roots and coefficients of the solution n(t) of the reactor kinetics equations using the in-hour equation. Based on the values dt = t_{k+1} - t_k and t_0 = 0 input by the user, the program approximates n(t) on each time interval t_k ≤ t < t_{k+1}, where k = 0, 1, 2, 3, ..., by the step function n(t) = n_0 Σ_{j=1}^{7} A_j e^{ω_j t_k}. The program then commands the DAQ device to output the voltage V(t) = n(t) volts at time t. Measurements of standard reactivities with the reactivity meter showed that the maximum deviation of the measured reactivity from its standard value was less than 1%. (author)
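The step-function approximation can be sketched as below; the roots ω_j and coefficients A_j are placeholder values, not solutions of the in-hour equation for any real reactivity insertion:

```python
import math

# Placeholder roots and coefficients (NOT in-hour solutions for a real
# reactor); the program described above computes the actual seven
# (omega_j, A_j) pairs from the kinetics parameters and rho_0.
omegas = [0.5, -0.1, -0.5, -1.0, -5.0, -20.0, -80.0]
amps   = [0.6,  0.1,  0.1,  0.1,  0.05, 0.03, 0.02]  # chosen to sum to 1

def n_of_t(t, n0=1.0):
    """n(t) = n0 * sum_{j=1..7} A_j * exp(omega_j * t)."""
    return n0 * sum(A * math.exp(w * t) for A, w in zip(amps, omegas))

def step_samples(t0, dt, steps):
    # Hold n(t_k) constant over each interval [t_k, t_k + dt),
    # as the calibrator's step-function output does.
    return [n_of_t(t0 + k * dt) for k in range(steps)]

print(round(n_of_t(0.0), 6))  # 1.0, since the A_j sum to 1
```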

  19. A potent approach for the development of FPGA based DAQ system for HEP experiments

    International Nuclear Information System (INIS)

    Khan, Shuaib Ahmad; Mitra, Jubin; Nayak, Tapan Kumar; David, Erno; Kiss, Tivadar

    2017-01-01

    With ever increasing particle beam energies and interaction rates in modern High Energy Physics (HEP) experiments at present and future accelerator facilities, there has always been a demand for robust Data Acquisition (DAQ) schemes which perform in harsh radiation environments and handle high data volumes. The scheme is required to be flexible enough to adapt to the demands of future detector and electronics upgrades, while at the same time keeping the cost factor in mind. To address these challenges, in the present work we discuss an efficient DAQ scheme for error-resilient, high-speed data communication on commercially available state-of-the-art FPGAs with optical links. The scheme utilises the GigaBit Transceiver (GBT) protocol to establish a radiation-tolerant communication link between the on-detector front-end electronics, situated in the harsh radiation environment, and the back-end Data Processing Unit (DPU), placed in a low radiation zone. The acquired data are reconstructed in the DPU, which reduces the data volume significantly, and then transmitted to the computing farms through high-speed optical links using 10 Gigabit Ethernet (10GbE). In this study, we focus on the implementation and testing of the GBT protocol and 10GbE links on an Intel FPGA. Results of measurements of resource utilisation, critical path delays, signal integrity, eye diagrams and Bit Error Rate (BER) are presented, which are the indicators of efficient system performance.

  20. Experience using a distributed object oriented database for a DAQ system

    International Nuclear Information System (INIS)

    Bee, C.P.; Eshghi, S.; Jones, R.

    1996-01-01

    To configure the RD13 data acquisition system, we need many parameters which describe the various hardware and software components. Such information has been defined using an entity-relation model and stored in a commercial memory-resident database. During the last year, Itasca, an object oriented database management system (OODB), was chosen as a replacement database system. We have ported the existing databases (hardware and software configurations, run parameters etc.) to Itasca and integrated it with the run control system. We believe that it is possible to use an OODB in real-time environments such as DAQ systems. In this paper, we present our experience and impressions: why we wanted to change from an entity-relational approach, some useful features of Itasca, and the issues we met during this project, including the integration of the database into an existing distributed environment and the factors which influence performance. (author)

  1. Commissioning and integration testing of the DAQ system for the CMS GEM upgrade

    CERN Document Server

    Castaneda Hernandez, Alfredo Martin

    2017-01-01

    The CMS muon system will undergo a series of upgrades in the coming years to preserve and extend its muon detection capabilities during the High Luminosity LHC. The first of these will be the installation of triple-foil GEM detectors in the CMS forward region, with the goal of maintaining trigger rates and preserving good muon reconstruction even in the expected harsh environment. In 2017 the CMS GEM project is looking to achieve a major milestone with the installation of 5 super-chambers in CMS; this exercise will allow for the study of services installation and commissioning, and integration with the rest of the subsystems for the first time. An overview of the DAQ system will be given, with emphasis on its usage during chamber quality control testing, commissioning in CMS, and integration with the central CMS system.

  2. New COMPASS DAQ

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Yunpeng; Konorov, Igor

    2015-07-01

    This contribution focuses on the deployment and first results of the new FPGA-based data acquisition system (DAQ) of the COMPASS experiment. Since 2002, the number of channels has increased to approximately 300000 and the trigger rate to 30 kHz, while the average event size has remained roughly 35 kB. In order to handle the increased data rates, a new DAQ system with custom FPGA-based data handling cards (DHC) was designed to replace the event-building network. The DHCs are equipped with 16 high-speed serial links, 2 GB of DDR3 memory with a bandwidth of 6 GB/s, a Gigabit Ethernet connection, and the COMPASS Trigger Control System. Two different firmware versions are used: multiplexer and switch. The multiplexer DHC combines 15 incoming links into one outgoing link, whereas the switch combines 8 data streams from multiplexers and, using information from a look-up table, sends the full events to the readout engine servers, which receive the data on spillbuffer PCI-Express cards. Both types of DHC can buffer data, which allows the load to be distributed over the accelerator cycle. Software tools have been developed for configuration, run control, and monitoring. Communication between processes in the system is implemented using the DIM library. The DAQ is fully configurable from a web interface. The new DAQ system was deployed for the pilot run starting in September 2014. In the poster, preliminary performance and stability results of the new DAQ are presented and compared with the original system in more detail.
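The multiplexer DHC's combination of incoming links into one outgoing stream can be sketched as a round-robin merge; the tagging and framing here are illustrative, not the DHC firmware's format:

```python
from itertools import chain, zip_longest

def multiplex(links):
    """Merge per-link data streams into one ordered output stream,
    round-robin, tagging each word with its source link index: a toy
    software analogue of the multiplexer firmware's 15-to-1 merge."""
    tagged = [[(i, w) for w in words] for i, words in enumerate(links)]
    merged = chain.from_iterable(zip_longest(*tagged))
    return [x for x in merged if x is not None]

links = [["a1", "a2"], ["b1"], ["c1", "c2"]]
print(multiplex(links))
# [(0, 'a1'), (1, 'b1'), (2, 'c1'), (0, 'a2'), (2, 'c2')]
```

The source tag is what lets a downstream stage (the switch, in the COMPASS design) reassemble complete events from the interleaved stream.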

  3. Highly Accurate Timestamping for Ethernet-Based Clock Synchronization

    OpenAIRE

    Loschmidt, Patrick; Exel, Reinhard; Gaderer, Georg

    2012-01-01

    It is of great importance, and not only for test and measurement, to synchronize the clocks of networked devices in order to coordinate data acquisition in time. In this context, the quest for high accuracy in Ethernet-based clock synchronization has been significantly supported by enhancements to the Network Time Protocol (NTP) and the introduction of the Precision Time Protocol (PTP). The latter was even applied to instrumentation and measurement applications through the introduction of LXI....
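The PTP/NTP-style two-way exchange underlying such synchronization computes the slave's clock offset and the path delay from four timestamps; a sketch with illustrative numbers, assuming a symmetric path:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard two-way time-transfer arithmetic:
      t1: master sends Sync        t2: slave receives it
      t3: slave sends Delay_Req    t4: master receives it
    Assuming a symmetric path, the slave's clock offset and the
    one-way path delay follow from the four timestamps."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Slave clock 5 units ahead, true one-way delay 3 units (illustrative).
t1 = 100
t2 = t1 + 3 + 5   # arrival per the slave clock: delay + offset
t3 = t2 + 10
t4 = t3 + 3 - 5   # arrival per the master clock: delay - offset
print(ptp_offset_and_delay(t1, t2, t3, t4))  # (5.0, 3.0)
```

The accuracy achievable in practice is set by how close to the wire the four timestamps are taken, which is exactly what hardware-assisted timestamping, the subject of the paper above, improves.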

  4. DAQ INSTALLATION IN USC COMPLETED

    CERN Multimedia

    A. Racz

    After one year of work at P5 in the underground control rooms (USC55-S1&S2), the DAQ installation in USC55 is complete. The first half of 2006 was dedicated to installing the DAQ infrastructure (private cable trays, rack equipment for very dense cabling, connection to services, i.e. water, power, network). The second half was spent installing the custom-made electronics (FRLs and FMMs) and placing all the inter-rack cables/fibers connecting all sub-systems to the central DAQ (more details are given in the internal pages). The installation has been carried out by DAQ group members from both the hardware and software sides. The pictures show the very nice team spirit!

  5. NOvA Event Building, Buffering and Data-Driven Triggering From Within the DAQ System

    Energy Technology Data Exchange (ETDEWEB)

    Fischler, M. [Fermilab; Green, C. [Fermilab; Kowalkowski, J. [Fermilab; Norman, A. [Fermilab; Paterno, M. [Fermilab; Rechenmacher, R. [Fermilab

    2012-06-22

    To make its core measurements, the NOvA experiment needs to make real-time data-driven decisions involving beam-spill time correlation and other triggering issues. NOvA-DDT is a prototype Data-Driven Triggering system, built using the Fermilab artdaq generic DAQ/Event-building tool set. This provides the advantages of sharing online software infrastructure with other Intensity Frontier experiments, and of being able to use any offline analysis module, unchanged, as a component of the online triggering decisions. The NOvA-artdaq architecture chosen has significant advantages, including graceful degradation if the triggering decision software fails or cannot complete quickly enough for some fraction of the time-slice "events." We have tested and measured the performance and overhead of NOvA-DDT using an actual Hough-transform-based trigger decision module taken from the NOvA offline software. The results of these tests, a mean time of 98 ms per event using only 1/16 of the available processing power of a node, and overheads of about 2 ms per event, provide a proof of concept: NOvA-DDT is a viable strategy for data acquisition, event building, and trigger processing at the NOvA far detector.

  6. Design of low noise front-end ASIC and DAQ system for CdZnTe detector

    International Nuclear Information System (INIS)

    Luo Jie; Deng Zhi; Liu Yinong

    2012-01-01

    A low noise front-end ASIC has been designed for CdZnTe detectors. The chip contains 16 channels, each consisting of a dual-stage charge sensitive preamplifier, a 4th-order semi-Gaussian shaper, a leakage current compensation (LCC) circuit, a discriminator and an output buffer. The chip has been fabricated in a Chartered 0.35 μm CMOS process, and preliminary results show that it works well. The total channel charge gain can be adjusted from 100 mV/fC to 400 mV/fC and the peaking time from 1 μs to 4 μs. The minimum measured ENC at zero input capacitance is 70 e and the minimum noise slope is 20 e/pF. The peak detector and derandomizer (PDD) ASIC developed by BNL and an associated USB DAQ board are also introduced in this paper. Two front-end ASICs can be connected to the PDD ASIC on the USB DAQ board to compose a 32-channel DAQ system for CdZnTe detectors. (authors)

  7. NOvA Event Building, Buffering and Data-Driven Triggering From Within the DAQ System

    International Nuclear Information System (INIS)

    Fischler, M; Rechenmacher, R; Green, C; Kowalkowski, J; Norman, A; Paterno, M

    2012-01-01

    The NOvA experiment is a long baseline neutrino experiment designed to make precision probes of the structure of neutrino mixing. The experiment features a unique deadtimeless data acquisition system that is capable of acquiring and building an event data stream from the continuous readout of the more than 360,000 far detector channels. In order to achieve its physics goals the experiment must be able to buffer, correlate and extract the data in this stream with the beam-spills that occur at Fermilab. In addition the NOvA experiment seeks to enhance its data collection efficiencies for rare classes of event topologies that are valuable for calibration through the use of data driven triggering. The NOvA-DDT is a prototype Data-Driven Triggering system. NOvA-DDT has been developed using the Fermilab artdaq generic DAQ/Event-building toolkit. This toolkit provides the advantages of sharing online software infrastructure with other Intensity Frontier experiments, and of being able to use any offline analysis module-unchanged-as a component of the online triggering decisions. We have measured the performance and overhead of the NOvA-DDT framework using a Hough transform based trigger decision module developed for the NOvA detector to identify cosmic rays. These tests, which were run on the NOvA prototype near detector, yielded a mean processing time of 98 ms per event, while consuming only 1/16th of the available processing capacity. These results provide a proof of concept that a NOvA-DDT based processing system is a viable strategy for data acquisition and triggering for the NOvA far detector.

  8. The 2002 Test Beam DAQ

    CERN Multimedia

    Mapelli, L.

    The ATLAS Tilecal group was the first user of the Test Beam version of the DAQ/EF-1 prototype in 2000. The prototype was successfully tested in the lab in summer 1999 and was officially adopted as the baseline solution for the Test Beam DAQ at the end of 1999. It provides the right solution for users who need a modern data acquisition chain for final or almost-final front-end and off-detector electronics (RODs and ROD emulators). The typical architecture for the readout and the DAQ is sketched in the figure below. A number of detector crates can send data over the Read Out Link to the Read Out System. The Read Out System sends data over an Ethernet link to a SubFarm PC that forwards the data to Central Data Recording. In 2001 the Muon MDT group also adopted this modern DAQ, where for the first time a PC-based ReadOut System was used, instead of the VME-based implementation used in 2000 and for the Tilecal DAQ in 2001. In 2002 Tilecal also adopted the PC-based implement...

  9. Development of the DAQ System of Triple-GEM Detectors for the CMS Muon Spectrometer Upgrade at LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00387583

    The Gas Electron Multiplier (GEM) upgrade project aims at improving the performance of the muon spectrometer of the Compact Muon Solenoid (CMS) experiment, which will suffer from the increase in luminosity of the Large Hadron Collider (LHC). After a long technical stop in 2019-2020, the LHC will restart and run at a luminosity of 2 × 10³⁴ cm⁻² s⁻¹, twice its nominal value. This will in turn increase the rate of particles to which the detectors in CMS will be exposed and affect their performance. The muon spectrometer in particular will suffer from a degraded detection efficiency due to the lack of redundancy in its most forward region. To solve this issue, the GEM collaboration proposes to instrument the first muon station with Triple-GEM detectors, a technology which has proven to be resistant to high fluxes of particles. Within the GEM collaboration, the Data Acquisition (DAQ) subgroup is in charge of the development of the electronics and software of the DAQ system of the detectors. This thesis presents th...

  10. DAQ Architecture for the LHCb Upgrade

    International Nuclear Information System (INIS)

    Liu, Guoming; Neufeld, Niko

    2014-01-01

    LHCb will upgrade its detector in 2018. After the upgrade, the LHCb experiment will run at a high luminosity of 2 × 10³³ cm⁻² s⁻¹. The upgraded detector will be read out at 40 MHz with a highly flexible software-based triggering strategy. The Data Acquisition (DAQ) system of LHCb reads out the data fragments from the Front-End Electronics and transports them to the High-Level Trigger farm at an aggregate throughput of ∼ 32 Tbit/s. The DAQ system will be based on high-speed network technologies such as InfiniBand and/or 10/40/100 Gigabit Ethernet. Independent of the network technology, there are different possible architectures for the DAQ system. In this paper, we present our studies on the DAQ architecture, where we analyze size, complexity and relative cost. We evaluate and compare several data-flow schemes for a network-based DAQ: push, pull and push with barrel-shifter traffic shaping. We also discuss the requirements and overall implications of the data-flow schemes on the DAQ system.
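The barrel-shifter traffic shaping compared above staggers the sources so that, in any time slot, no two sources target the same event-builder destination. A toy schedule illustrates the idea (the node count and slot model are illustrative, not LHCb's actual parameters):

```python
def barrel_shifter_schedule(n_nodes, n_slots):
    """Barrel-shifter traffic shaping: in time slot t, source i sends
    its event fragment to destination (i + t) % n_nodes. In every slot
    the destinations form a permutation of the sources, so the network
    core never sees two fragments contending for one output port."""
    return [[(i + t) % n_nodes for i in range(n_nodes)]
            for t in range(n_slots)]

sched = barrel_shifter_schedule(n_nodes=8, n_slots=8)
for slot in sched:
    # no destination appears twice within a slot
    assert len(set(slot)) == len(slot)
```

In a plain push scheme all sources may burst toward one destination at once, which is what this round-robin rotation avoids.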

  11. LHCb; DAQ Architecture for the LHCb Upgrade

    CERN Multimedia

    Neufeld, N

    2013-01-01

    LHCb will upgrade its detector in 2018. After the upgrade, the LHCb experiment will run at a high luminosity of 2 × 10$^{33}$ cm$^{-2}$ s$^{-1}$. The upgraded detector will be read out at 40 MHz with a highly flexible software-based triggering strategy. The Data Acquisition (DAQ) system of LHCb reads out the data fragments from the Front-End Electronics and transports them to the High-Level Trigger farm at an aggregate throughput of 32 Tbit/s. The DAQ system will be based on high-speed network technologies such as InfiniBand and/or 10/40/100 Gigabit Ethernet. Independent of the network technology, there are different possible architectures for the DAQ system. In this paper, we present our studies on the DAQ architecture, where we analyze size, complexity and (relative) cost. We evaluate and compare several data-flow schemes for a network-based DAQ: push, pull and push with barrel-shifter traffic shaping. We also discuss the requirements and overall implications of the data-flow schemes on the DAQ ...

  12. DATA ACQUISITION (DAQ)

    CERN Multimedia

    Gerry Bauer

    The CMS Storage Manager System The tail-end of the CMS Data Acquisition System is the Storage Manager (SM), which collects output from the HLT and stages the data at Cessy for transfer to its ultimate home in the Tier-0 center. An SM system has been used by CMS for several years with steadily evolving software within the XDAQ framework, but until relatively recently only with provisional hardware. The SM is well known to much of the collaboration through the ‘MiniDAQ’ system, which served as the central DAQ system in 2007 and lives on in 2008 for dedicated sub-detector commissioning. Since March 2008 a first phase of the final hardware has been commissioned and used in CMS Global Runs. The system originally planned for 2008 aimed at recording ~1MB events at a few hundred Hz. The building blocks to achieve this are based on Nexsan's SATABeast storage array - a device housing up to 40 disks of 1TB each and possessing two controllers, each capable of almost 200 MB/sec throughput....

  13. A DAQ-Device-Based Continuous Wave Near-Infrared Spectroscopy System for Measuring Human Functional Brain Activity

    Directory of Open Access Journals (Sweden)

    Gang Xu

    2014-01-01

    Full Text Available In the last two decades, functional near-infrared spectroscopy (fNIRS) has become increasingly popular as a neuroimaging technique. An fNIRS instrument measures the local hemodynamic response, which indirectly reflects functional neural activity in the human brain. In this study, an easily implemented way to establish a DAQ-device-based fNIRS system is proposed. The basic instrumentation components (light-source driving, signal conditioning, sensors, and optical fiber) of the fNIRS system are described. The digital in-phase and quadrature demodulation method was applied in LabVIEW software to distinguish light sources from the different emitters. The effectiveness of the custom-made system was verified by simultaneous measurement with a commercial instrument (ETG-4000) during a Valsalva maneuver experiment. The light intensity data acquired from the two systems were highly correlated for the lower wavelength (Pearson’s correlation coefficient r = 0.92, P < 0.01) and the higher wavelength (r = 0.84, P < 0.01). Further, a mental arithmetic experiment was implemented to detect neural activation in the prefrontal cortex. Of 9 participants, significant cerebral activation was detected in 6 subjects (P < 0.05) for oxyhemoglobin and in 8 subjects (P < 0.01) for deoxyhemoglobin.
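The in-phase/quadrature demodulation used above separates emitters by modulating each at a distinct carrier frequency, then mixing the detector signal with reference cosine and sine waves and low-pass filtering. A minimal sketch of the idea (carrier frequencies, amplitudes and the averaging filter are illustrative choices, not the paper's parameters):

```python
import math

def iq_demodulate(samples, f_carrier, f_sample):
    """Digital I/Q demodulation: mix the detector signal with a
    reference cosine and sine at the carrier frequency, then average
    (a crude low-pass) to recover that carrier's amplitude."""
    n = len(samples)
    i_sum = q_sum = 0.0
    for k, s in enumerate(samples):
        phase = 2 * math.pi * f_carrier * k / f_sample
        i_sum += s * math.cos(phase)
        q_sum += s * math.sin(phase)
    i_avg, q_avg = 2 * i_sum / n, 2 * q_sum / n
    return math.hypot(i_avg, q_avg)

# Two emitters modulated at 1 kHz and 1.5 kHz, sampled at 12 kHz over
# an integer number of cycles of both carriers (so cross terms vanish).
fs, n = 12000, 1200
signal = [0.8 * math.sin(2 * math.pi * 1000 * k / fs)
          + 0.3 * math.sin(2 * math.pi * 1500 * k / fs)
          for k in range(n)]
a1 = iq_demodulate(signal, 1000, fs)  # recovers ~0.8
a2 = iq_demodulate(signal, 1500, fs)  # recovers ~0.3
```

The key property is that averaging over whole carrier cycles zeroes the contribution of every other emitter's frequency, so one detector channel can serve several light sources.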

  14. Time synchronization for an Ethernet-based real-time token network

    NARCIS (Netherlands)

    Hanssen, F.T.Y.; van den Boom, Joost; Jansen, P.G.; Scholten, Johan

    We present a distributed clock synchronization algorithm. It performs clock synchronization on an Ethernet-based real-time token local area network, without the use of an external clock source. It is used to enable the token schedulers in each node to agree upon a common time. Its intended use is in
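The abstract does not spell out the algorithm, but clock synchronization without an external source generally rests on a two-way message exchange to estimate peer clock offset. For orientation, this is the classic NTP/IEEE 1588-style estimate (a generic sketch under a symmetric-delay assumption, not the paper's token-network scheme):

```python
def estimate_offset(t1, t2, t3, t4):
    """Two-way time transfer.
    t1 = request sent (local clock), t2 = request received (remote),
    t3 = reply sent (remote),       t4 = reply received (local).
    Assuming symmetric path delay, the remote clock's offset from the
    local clock is ((t2 - t1) + (t3 - t4)) / 2, and the round-trip
    delay is (t4 - t1) - (t3 - t2)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Toy case: remote clock runs 5 ms ahead, one-way delay 2 ms each way.
offset, delay = estimate_offset(t1=100.0, t2=107.0, t3=108.0, t4=105.0)
```

With the toy timestamps above the estimator returns an offset of 5 ms and a round-trip delay of 4 ms, matching the scenario in the comment.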

  15. Using Linux PCs in DAQ applications

    CERN Document Server

    Ünel, G; Beck, H P; Cetin, S A; Conka, T; Crone, G J; Fernandes, A; Francis, D; Joosb, M; Lehmann, G; López, J; Mailov, A A; Mapelli, Livio P; Mornacchi, Giuseppe; Niculescu, M; Petersen, J; Tremblet, L J; Veneziano, Stefano; Wildish, T; Yasu, Y

    2000-01-01

    The ATLAS Data Acquisition/Event Filter "-1" (DAQ/EF-1) project provides the opportunity to explore the use of commodity hardware (PCs) and Open Source software (Linux) in DAQ applications. In DAQ/EF-1 there is an element called the LDAQ which is responsible for providing local run-control, error-handling and reporting for a number of read-out modules in front-end crates. This element is also responsible for providing event data for monitoring and for the interface with the global control and monitoring system (Back-End). We present the results of an evaluation of the Linux operating system made in the context of DAQ/EF-1, where there are no strong real-time requirements. We also report on our experience in implementing the LDAQ on a VMEbus-based PC (the VMIVME-7587) and on a desktop PC linked to VMEbus with a Bit3 interface, both running Linux. We then present the problems encountered during the integration with VMEbus, the status of the LDAQ implementation and draw some conclusions on the use of Linux in DAQ applica...

  16. ATLAS DAQ/HLT rack DCS

    International Nuclear Information System (INIS)

    Ermoline, Yuri; Burckhart, Helfried; Francis, David; Wickens, Frederick J.

    2007-01-01

    The ATLAS Detector Control System (DCS) group provides a set of standard tools used by subsystems to implement their local control systems. The ATLAS Data Acquisition and High Level Trigger (DAQ/HLT) rack DCS provides monitoring of environmental parameters (air temperature, humidity, etc.). The DAQ/HLT racks are located in the underground counting room (20 racks) and in the surface building (100 racks). The rack DCS is based on standard ATLAS tools and is integrated into the overall operation of the experiment. The implementation is based on a commercial control package and additional components developed by the CERN Joint Controls Project Framework. The prototype implementation and measurements are presented

  17. Future of DAQ Frameworks and Approaches, and Their Evolution towards the Internet of Things

    Science.gov (United States)

    Neufeld, Niko

    2015-12-01

    Nowadays, a DAQ system is a complex network of processors, sensors and many other active devices. Historically, providing a framework for DAQ has been a very important role of the host institutes of experiments. Reviewing the evolution of such DAQ frameworks is a very interesting subject for the conference. “Internet of Things” is a recent buzzword, but a DAQ framework could be a good example of IoT.

  18. DAQ systems for the high energy and nuclotron internal target polarimeters with network access to polarization calculation results and raw data

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2004-01-01

    The on-line data acquisition (DAQ) system for the Nuclotron Internal Target Polarimeter (ITP) at the LHE, JINR, is described with respect to its design and implementation, which are based on the distributed data acquisition and processing system qdpb. Software modules specific to this implementation (dependent on the ITP data contents and hardware layout) are discussed briefly in comparison with those for the High Energy Polarimeter (HEP) at the LHE, JINR. User access methods, both to raw data and to the results of polarization calculations of the ITP and HEP, are discussed

  19. A DAQ system for the experiment of physics based on G-Link

    International Nuclear Information System (INIS)

    Jiang Xiao; Jin Ge

    2007-01-01

    In this paper, a high-speed fiber data transfer system based on G-Link for physics experiments is introduced. The architecture and configuration of the fiber link with its core chips, the HDMP-1022/1024, the driver circuit of the laser diode and the CIMT coding technology are described. With this high-speed fiber data transfer technology, a 16-channel data acquisition system was designed and used in a wind tunnel experiment. (authors)

  20. Development and test of the DAQ system for a Micromegas prototype installed into the ATLAS experiment

    CERN Document Server

    Zibell, Andre; The ATLAS collaboration; Bianco, Michele; Martoiu, Victor Sorin

    2015-01-01

    A Micromegas (MM) quadruplet prototype with an active area of 0.5 m$^2$ that adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC RUN-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS compatible ReadOutDriver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used in order to transmit the data after generating valid event fragments to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. By using the ATLAS TDAQ Soft...

  1. Build of tri-crosscheck platform for complex HDL design in LHCb's DAQ system

    International Nuclear Information System (INIS)

    Hou Lei; Gong Guanghua; Shao Beibei

    2008-01-01

    TELL1 is the off-detector electronics acquisition readout board for the LHCb experiment. In the development of TELL1, three data stream systems were built to tri-crosscheck the complex VHDL implementation for the FPGAs employed by TELL1. This paper introduces the tri-crosscheck platform as well as the way it is used in testing. (authors)

  2. The readiness of the ATLAS Trigger-DAQ system for the second LHC run

    CERN Document Server

    Rammensee, Michael; The ATLAS collaboration

    2015-01-01

    After its first shutdown, the Large Hadron Collider (LHC) will provide proton-proton collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The updated system is radically different from the previous implementation, both in terms of architecture and expected performance. The main architecture has been reshaped in order to profit from technological progress and to maximize the flexibility and efficiency of the data selection process. Design choices and the strategies employed to minimize the data-collection and selection latency will be discussed. First results of tests done during the commissioning phase and the operational performance after the first months of data taking will be presented.

  3. The readiness of ATLAS Trigger-DAQ system for the second LHC run

    CERN Document Server

    Rammensee, Michael; The ATLAS collaboration

    2015-01-01

    After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The updated system is radically different from the previous implementation, both in terms of architecture and expected performance. The main architecture has been reshaped in order to profit from technological progress and to maximize the flexibility and efficiency of the data selection process. The trigger system in ATLAS consists of a hardware Level-1 (L1) trigger and a software-based high-level trigger (HLT) that reduce the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. The pre-existing two-level software filtering, known as L2 and the Event Filter, is now merged into a single process, performing incremental data collection and analysis. This design has many advantages, among which are: the radical simplification of the architec...

  4. Ethernet-based mobility architecture for 5G

    DEFF Research Database (Denmark)

    Cattoni, Andrea Fabio; Mogensen, Preben; Vesterinen, Seppo

    2014-01-01

    of mobile devices and sensors. In this paper we propose a paradigm shift for the evolved Packet Core for the future 5G system. By leveraging on the economy of scale of software–based ICT technologies, namely Software Defined Networking and cloud computing, we propose a hierarchically cloudified mobile...... network. In particular, in this paper we focus on the mobility aspects within such new architecture, proposing low latency Layer 2 solutions for the Access Network, while exploiting aggregating Layer 3 mobility functionalities in the regional and national clouds....

  5. BioDAQ--a simple biosignal acquisition system for didactic use.

    Science.gov (United States)

    Csaky, Z; Mihalas, G I; Focsa, M

    2002-01-01

    A simple, inexpensive device for biosignal acquisition is presented. It mainly meets the requirements for didactic purposes specific to medical informatics laboratory classes. The system has two main types of devices: the 'student unit'--the simplest one, used during lessons on real signals--and the 'demo unit', which can also be used in medical practice or for collecting biological signals. It is able to record optical pulse, sphygmogram, ECG (1-4 leads), EEG or EMG (1-4 channels). For didactic purposes it has a wide range of recording options: variable sampling rate, gain and filtering. It can also be used for tele-acquisition via the Internet.

  6. Characterization of a DAQ system for the readout of a SiPM based shashlik calorimeter

    International Nuclear Information System (INIS)

    Berra, A.; Bonvicini, V.; Bosisio, L.; Lietti, D.; Penzo, A.; Prest, M.; Rabaioli, S.; Rashevskaya, I.; Vallazza, E.

    2014-01-01

    Silicon PhotoMultipliers (SiPMs) are a recently developed type of silicon photodetector characterized by high gain and insensitivity to magnetic fields, which make them suitable detectors for the next generation of high energy and space physics experiments. This paper presents the performance of a readout system for SiPMs based on the MAROC3 ASIC. The ASIC consists of 64 channels working in parallel, each one with a variable-gain pre-amplifier, a tunable slow shaper with a sample-and-hold circuit for the analog readout and a tunable fast shaper for the digital one. In the tests described in this paper, only the analog part of the ASIC has been used. A frontend board based on the MAROC3 ASIC has been tested at CERN coupled to a scintillator-lead shashlik calorimeter, read out with 36 large-area SiPMs. The performance of the system has been characterized in terms of linearity and energy resolution on the CERN PS-T9 and SPS-H2 beamlines, using different configurations of the ASIC parameters

  7. A PandaRoot interface for binary data in the PANDA prototype DAQ system

    Energy Technology Data Exchange (ETDEWEB)

    Fleischer, Soeren; Lange, Soeren; Kuehn, Wolfgang; Hahn, Christopher; Wagner, Milan [2. Physikalisches Institut, Uni Giessen (Germany); Collaboration: PANDA-Collaboration

    2015-07-01

    The PANDA experiment at FAIR will feature a raw data rate of more than 20 MHz. Only a small fraction of these events are of interest. Consequently, a sophisticated online data reduction setup is required, lowering the final output data rate by a factor of roughly 10³ by discarding data which does not fulfil certain criteria. The first stages of the data reduction will be implemented using FPGA-based Compute Nodes. For the planned tests with prototype detectors, a small but scalable system is being set up which will allow the concept to be tested in a realistic environment at high rates. In this contribution, we present a PandaRoot implementation of a state-machine-based binary parser which receives detector data from the Compute Nodes via GbE links, converting the data stream into the PandaRoot format for further analysis and mass storage.
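A state-machine binary parser of the kind described above consumes the incoming byte stream one field at a time, switching state as each header field completes. The sketch below uses a hypothetical frame layout (magic byte, 16-bit big-endian length, payload), not the actual PANDA data format:

```python
import struct

MAGIC = 0xA5  # hypothetical frame-start marker

def parse_stream(data):
    """Minimal state-machine parser for a framed binary stream:
    [magic:1][length:2 big-endian][payload:length]. Bytes that do not
    start a valid frame are skipped (stream resynchronisation)."""
    state, frames, need, buf, i = "MAGIC", [], 0, b"", 0
    while i < len(data):
        if state == "MAGIC":
            if data[i] == MAGIC:
                state, buf = "LENGTH", b""
            i += 1
        elif state == "LENGTH":
            buf += data[i:i + 1]
            i += 1
            if len(buf) == 2:
                (need,) = struct.unpack(">H", buf)
                buf = b""
                if need == 0:
                    frames.append(b"")
                    state = "MAGIC"
                else:
                    state = "PAYLOAD"
        elif state == "PAYLOAD":
            take = min(need - len(buf), len(data) - i)
            buf += data[i:i + take]
            i += take
            if len(buf) == need:
                frames.append(buf)
                state = "MAGIC"
    return frames

# A junk byte, then two frames carrying "abc" and "xy".
stream = (b"\x00" + bytes([MAGIC]) + struct.pack(">H", 3) + b"abc"
          + bytes([MAGIC]) + struct.pack(">H", 2) + b"xy")
```

Keeping the partial-field buffer in the parser state is what lets such a parser run over data that arrives in arbitrary network-sized chunks.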

  8. Flexible DAQ card for detector systems utilizing the CoaXPress communication standard

    International Nuclear Information System (INIS)

    Neue, G.; Hejtmánek, M.; Marčišovský, M.; Voleš, P.

    2015-01-01

    This work concerns the design and construction of a flexible FPGA-based data acquisition system aimed at particle detectors. The interface card presented here was designed for large-area detectors with millions of individual readout channels. Flexibility was achieved by partitioning the design into multiple PCBs, creating a set of modular blocks and allowing a wide variety of configurations to be created by simply stacking functional PCBs together. This way the user can easily toggle the polarity of the high-voltage bias supply, switch the downstream interface from CoaXPress to PCIe, or stream HDMI directly. We addressed the issues of data throughput, data buffering, bias voltage generation, trigger timing and fine tuning of the whole readout chain, enabling smooth data transmission. On the current prototype, we have wire-bonded a MediPix2 MXR quad and connected it to a XILINX FPGA. For the downstream interface, we implemented the CoaXPress communication protocol, which enables us to stream data at 3.125 Gbps to a standard PC

  9. PCI Based Read-out Receiver Card in the ALICE DAQ System

    CERN Document Server

    Carena, W; Dénes, E; Divià, R; Schossmaier, K; Soós, C; Sulyán, J; Vascotto, Alessandro; Van de Vyvre, P

    2001-01-01

    The Detector Data Link (DDL) is the high-speed optical link for the ALICE experiment. This link shall transfer the data coming from the detectors at a rate of 100 MB/s. The main components of the link have been developed: the Destination Interface Unit (DIU), the Source Interface Unit (SIU) and the Read-out Receiver Card (RORC). The first RORC version is based on the VME bus. The performance tests show that the maximum VME bandwidth could be reached. Meanwhile the PCI bus became very popular and is used in many platforms, so the development of a PCI-based version has been started. This document describes the prototype version in three sections. An overview explains the main purpose of the card: to provide an interface between the DDL and the PCI bus. Acting as a 32-bit/33-MHz PCI master, the card is able to write or read directly to or from the system memory from or to the DDL, respectively. Besides these functions the card can also be used as an autonomous data generator. The card has been designed to be well adapted to ...

  10. DATA ACQUISITION (DAQ)

    CERN Multimedia

    Attila Racz

    DAQ/On-Line Computing installation status After the installation and commissioning of the DAQ underground elements in 2006 and the first months of 2007, all efforts are now directed to the installation and commissioning of the On-Line Computing farm (OLC) located on the first floor of the SCX5 building at the CMS experimental site. In summer 2007, 640 Readout Unit servers (RUs) were installed and commissioned, along with 160 servers providing general services for the users (DCS, database, RCMS, data storage, etc.). Since the global run of November 2007, the event fragments are assembled and processed by the OLC. Thanks to the flexibility of the trapezoidal event builder, some RUs are acting as Filter Units (FUs) and hence provide the full processing chain with a single type of server. With this temporary configuration, all FEDs can be read out at a few kHz. Since the March 2008 global run, events are stored on the storage manager SAN in the OLC, and subsequently transferred over the dedicated CDR link (2 x...

  11. Overview of DAQ developments for the CBM experiment

    Energy Technology Data Exchange (ETDEWEB)

    Emschermann, David [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH (Germany); Collaboration: CBM-Collaboration

    2015-07-01

    The Compressed Baryonic Matter experiment (CBM) at the future Facility for Antiproton and Ion Research (FAIR) is a fixed-target setup operating at very high interaction rates of up to 10 MHz. The high rate capability can be achieved with fast and radiation-hard detectors equipped with free-streaming readout electronics. A high-speed data acquisition (DAQ) system will forward data volumes of up to 1 TB/s from the CBM cave to the first level event selector (FLES), located 400 m away. This presentation showcases recent developments of DAQ components for CBM. We highlight the anticipated DAQ setup for beam tests scheduled for the end of 2015.

  12. Applications of CORBA in the ATLAS prototype DAQ

    CERN Document Server

    Jones, R; Mapelli, Livio P; Ryabov, Yu

    2000-01-01

    This paper presents the experience of using the Common Object Request Broker Architecture (CORBA) in the ATLAS prototype DAQ project. Many communication links in the DAQ system have been designed and implemented using the CORBA standard. A public domain package called Inter-Language Unification (ILU) has been used to implement CORBA-based communication between DAQ components in a local area network (LAN) of heterogeneous computers. The CORBA Naming Service provides the principal mechanism through which most clients of an ORB-based system locate objects that they intend to use. In our project, conventions are employed that meaningfully partition the name space of the Naming Service according to divisions in the DAQ system itself. The Inter Process Communication (IPC) package, implemented in C++ on top of CORBA/ILU, incorporates this facility and hides the details of the naming schema. The development procedure and environment for remote database access using IPC is described. Various end-use...

  13. Research and development of common DAQ platform

    International Nuclear Information System (INIS)

    Higuchi, T.; Igarashi, Y.; Nakao, M.; Suzuki, S.Y.; Tanaka, M.; Nagasaka, Y.; Varner, G.

    2003-01-01

    The upgrade of the KEKB accelerator toward L = 10³⁵ cm⁻² s⁻¹ requires an upgrade of the Belle data acquisition system. To match the market trend, we develop a DAQ platform based on the PCI bus that enables fast DAQ with a longer system lifetime. The platform is a VME-9U motherboard comprising four slots for signal digitization modules and three PMC slots to house CPUs for data compression. The platform is equipped with event FIFOs for data buffering to minimize dead-time. A trigger module residing on a VME-6U-size rear board is connected to the 9U board via a PCI-PCI bridge to generate an interrupt for the CPU upon the level-1 trigger. (author)

  14. artdaq: DAQ software development made simple

    Science.gov (United States)

    Biery, Kurt; Flumerfelt, Eric; Freeman, John; Ketchum, Wesley; Lukhanin, Gennadiy; Rechenmacher, Ron

    2017-10-01

    For a few years now, the artdaq data acquisition software toolkit has provided numerous experiments with ready-to-use components which allow for rapid development and deployment of DAQ systems. Developed within the Fermilab Scientific Computing Division, artdaq provides data transfer, event building, run control, and event analysis functionality. This latter feature includes built-in support for the art event analysis framework, allowing experiments to run art modules for real-time filtering, compression, disk writing and online monitoring. As art, also developed at Fermilab, is used for offline analysis, a major advantage of artdaq is that it allows developers to easily switch between developing online and offline software. artdaq continues to be improved. Support for an alternate mode of running, whereby data from some subdetector components are only streamed if requested, has been added; this option will reduce unnecessary DAQ throughput. Real-time reporting of DAQ metrics has been implemented, along with the flexibility to choose the format through which experiments receive the reports; these formats include the Ganglia, Graphite and syslog software packages, along with flat ASCII files. Additionally, work has been performed investigating more flexible modes of online monitoring, including the capability to run multiple online monitoring processes on different hosts, each running its own set of art modules. Finally, a web-based GUI interface through which users can configure details of their DAQ system has been implemented, increasing the ease of use of the system. Already successfully deployed on the LArIAT, DarkSide-50, DUNE 35ton and Mu2e experiments, artdaq will be employed for SBND and is a strong candidate for use on ICARUS and protoDUNE. With each experiment come new ideas for how artdaq can be made more flexible and powerful. The above improvements will be described, along with potential ideas for the future.

  15. BTeV trigger/DAQ innovations

    International Nuclear Information System (INIS)

    Votava, Margaret

    2005-01-01

    The BTeV experiment was a collider based high energy physics (HEP) B-physics experiment proposed at Fermilab. It included a large-scale, high speed trigger/data acquisition (DAQ) system, reading data off the detector at 500 Gbytes/sec and writing to mass storage at 200 Mbytes/sec. The online design was considered to be highly credible in terms of technical feasibility, schedule and cost. This paper will give an overview of the overall trigger/DAQ architecture, highlight some of the challenges, and describe the BTeV approach to solving some of the technical challenges. At the time of termination in early 2005, the experiment had just passed its baseline review. Although not fully implemented, many of the architecture choices, design, and prototype work for the online system (both trigger and DAQ) were well on their way to completion. Other large, high-speed online systems may have interest in the some of the design choices and directions of BTeV, including (a) a commodity-based tracking trigger running asynchronously at full rate, (b) the hierarchical control and fault tolerance in a large real time environment, (c) a partitioning model that supports offline processing on the online farms during idle periods with plans for dynamic load balancing, and (d) an independent parallel highway architecture

  16. Implementation of an Ethernet-Based Communication Channel for the Patmos Processor

    DEFF Research Database (Denmark)

    Pezzarossa, Luca; Kenn Toft, Jakob; Lønbæk, Jesper

    The Patmos processor, which is used as the processor intellectual property (IP) core of the T-CREST platform, is only equipped with an RS-232 serial port for communication with the outside world. The serial port is a minimal input/output device with limited speed and without native networking features. An Ethernet 10/100BASE-T IEEE 802.3 based communication channel is a reliable, high-speed communication interface (10/100 Mbit/s) that also supports networking. This technical report presents an implementation of an Ethernet-based communication channel for the Patmos processor, targeting the Terasic DE2-115 development board. We have designed the hardware to interface the EthMac Ethernet controller from OpenCores to Patmos and to the physical chip of the development board, and we have implemented a software library to drive the controller and to support some essential protocols. The design was implemented...

  17. LHCb DAQ network upgrade tests

    CERN Document Server

    Pisani, Flavio

    2013-01-01

    My project concerned the evaluation of new technologies for the DAQ network upgrade of LHCb. The first part consisted in developing an OpenFlow-based Clos network. This new technology is very interesting and powerful but, as shown by the results, it still needs further improvements. The second part consisted in testing and benchmarking 40GbE network equipment: Mellanox MT27500, Chelsio T580 and Huawei Cloud Engine 12804. An event-building simulation is currently being performed in order to check the feasibility of the DAQ network upgrade in LS2. The first results are promising.

  18. Communication between Trigger/DAQ and DCS in ATLAS

    International Nuclear Information System (INIS)

    Burckhart, H.; Jones, R.; Hart, R.; Khomoutnikov, V.; Ryabov, Y.

    2001-01-01

    Within the ATLAS experiment, Trigger/DAQ and DCS are both logically and physically separated. Nevertheless, there is a need to communicate. The initial problem definition and analysis suggested three subsystems; the Trigger/DAQ DCS Communication (DDC) project should support the ability to: 1. exchange data between Trigger/DAQ and DCS; 2. send alarm messages from DCS to Trigger/DAQ; 3. issue commands to DCS from Trigger/DAQ. Each subsystem is developed and implemented independently using a common software infrastructure. Among the various subsystems of the ATLAS Trigger/DAQ, the Online is responsible for control and configuration. It is the glue connecting the different systems such as data flow, level-1 and high-level triggers. The DDC uses the various Online components as an interface point on the Trigger/DAQ side with the PVSS II SCADA system on the DCS side, and addresses issues such as partitioning, time stamps, event numbers, hierarchy, authorization and security. PVSS II is a commercial product chosen by CERN to be the SCADA system for all LHC experiments. Its API provides full access to its database, which is sufficient to implement the three subsystems of the DDC software. The DDC project adopted the Online Software Process, which recommends a basic software life-cycle: problem statement, analysis, design, implementation and testing. Each phase results in a corresponding document or, in the case of implementation and testing, a piece of code. Inspection and review play a major role in the Online Software Process. The DDC documents have been inspected to detect flaws, resulting in improved quality. A first prototype of the DDC is ready and is foreseen to be used at the test-beam during summer 2001.
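
The three DDC subsystems (data exchange, alarms, commands) can be pictured as three message types flowing between Trigger/DAQ and DCS. The sketch below is an illustrative data model only, not the ATLAS Online or PVSS II API; all type and field names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    WARNING = 1
    ERROR = 2
    FATAL = 3

@dataclass
class DataUpdate:   # subsystem 1: data exchange between Trigger/DAQ and DCS
    parameter: str
    value: float
    timestamp: float

@dataclass
class Alarm:        # subsystem 2: alarm messages from DCS to Trigger/DAQ
    source: str
    severity: Severity
    text: str

@dataclass
class Command:      # subsystem 3: commands issued to DCS from Trigger/DAQ
    target: str
    action: str

# Hypothetical traffic on the three channels:
msgs = [
    DataUpdate("HV.channel03.voltage", 1502.4, 993945600.0),
    Alarm("cooling.rack12", Severity.WARNING, "temperature above threshold"),
    Command("HV.channel03", "ramp_down"),
]
for m in msgs:
    print(type(m).__name__, m)
```

Keeping the three channels as distinct types mirrors the abstract's point that each subsystem was developed independently on a common infrastructure.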

  19. Trigger and DAQ in the Combined Test Beam

    CERN Multimedia

    Dobson, M; Padilla, C

    2004-01-01

    Introduction During the Combined Test Beam the latest prototype of the ATLAS Trigger and DAQ system is being used to support the data taking of all the detectors. Further development of the TDAQ subsystems benefits from the direct experience given by the integration in the beam test. Support of detectors for the Combined Test Beam All ATLAS detectors need their own detector-specific DAQ development. The readout electronics is controlled by a Readout Driver (ROD), custom-built for each detector. The ROD receives data for events that are accepted by the first level trigger. The detector-specific part of the DAQ system needs to control the ROD and to respond to commands of the central DAQ (e.g. to "Start" a run). The ROD module then sends event data to a Readout System (ROS), a PC with special receiver modules/buffers. At this point the data enters the realm of the ATLAS DAQ and High Level Trigger system, constructed from Linux PCs connected with gigabit Ethernet networks. Most ATLAS detectors, representing s...

  20. The ALICE DAQ infoLogger

    Science.gov (United States)

    Chapeland, S.; Carena, F.; Carena, W.; Chibante Barroso, V.; Costa, F.; Dénes, E.; Divià, R.; Fuchs, U.; Grigore, A.; Ionita, C.; Delort, C.; Simonetti, G.; Soós, C.; Telesca, A.; Vande Vyvre, P.; Von Haller, B.; Alice Collaboration

    2014-04-01

    ALICE (A Large Ion Collider Experiment) is a heavy-ion experiment studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The ALICE DAQ (Data Acquisition System) is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches). The DAQ reads the data transferred from the detectors through 500 dedicated optical links at an aggregated and sustained rate of up to 10 Gigabytes per second and stores them at up to 2.5 Gigabytes per second. The infoLogger is the log system that centrally collects the messages issued by the thousands of processes running on the DAQ machines. It allows errors to be reported on the fly and keeps a trace of runtime execution for later investigation. More than 500000 messages are stored every day in a MySQL database, in a structured table that keeps track of 16 indexing fields for each message (e.g. time, host, user, ...). The total amount of logs for 2012 exceeds 75 GB of data and 150 million rows. We present in this paper the architecture and implementation of this distributed logging system, consisting of a client programming API, local data-collector processes, a central server, and interactive human interfaces. We review the operational experience during the 2012 run, in particular the actions taken to ensure shifters receive manageable and relevant content from the main log stream. Finally, we present the performance of this log system and future evolutions.
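
The value of indexing fields in such a log table is that targeted queries stay cheap even at 150 million rows per year. The sketch below uses sqlite3 standing in for MySQL and a reduced set of fields; the schema and messages are illustrative, not the actual infoLogger table.

```python
import sqlite3
import time

# Reduced schema: the real infoLogger table carries 16 indexing fields
# (time, host, user, facility, severity, ...); four are shown here.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE messages (
        timestamp REAL,
        host      TEXT,
        severity  TEXT,
        facility  TEXT,
        message   TEXT
    )
""")

def log(host, severity, facility, message):
    conn.execute(
        "INSERT INTO messages VALUES (?, ?, ?, ?, ?)",
        (time.time(), host, severity, facility, message),
    )

# Hypothetical messages from two DAQ machines:
log("daq-pc-042", "ERROR", "readout", "link timeout on DDL 17")
log("daq-pc-007", "INFO", "runcontrol", "run 193000 started")

# Indexed fields make targeted queries cheap, e.g. all errors:
rows = conn.execute(
    "SELECT message FROM messages WHERE severity = 'ERROR'"
).fetchall()
print(rows)
```

A shifter-facing interface would filter on exactly such fields to keep the main log stream manageable.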

  1. 3D virtual world remote laboratory to assist in designing advanced user defined DAQ systems based on FlexRIO and EPICS

    Energy Technology Data Exchange (ETDEWEB)

    Carpeño, A., E-mail: antonio.cruiz@upm.es [Universidad Politécnica de Madrid UPM, Madrid (Spain); Contreras, D.; López, S.; Ruiz, M.; Sanz, D.; Arcas, G. de; Esquembri, S. [Universidad Politécnica de Madrid UPM, Madrid (Spain); Vega, J.; Castro, R. [Laboratorio Nacional de Fusión CIEMAT, Madrid (Spain)

    2016-11-15

    Highlights: • Assist in the design of FPGA-based data acquisition systems using EPICS and FlexRIO. • Virtual Reality technologies are highly effective at creating rich training scenarios. • Virtual actions simulate the behavior of a real system to enhance the training process. • Virtual actions can make real changes remotely in the physical ITER’s Fast Controller. - Abstract: iRIO-3DLab is a platform devised to assist developers in the design and implementation of intelligent and reconfigurable FPGA-based data acquisition systems using EPICS and FlexRIO technologies. Although these architectures are very powerful in defining the behavior of DAQ systems, this advantage comes at the price of greater difficulty in understanding how the system works, and how it should be configured and built according to the hardware available and the processing demanded by the requirements of the diagnostics. In this regard, Virtual Reality technologies are highly effective at creating rich training scenarios due to their ability to provide immersive training experiences and collaborative environments. The designed remote laboratory is based on a 3D virtual world developed in Opensim, which is accessible through a standard free 3D viewer. Using a client-server architecture, the virtual world connects with a service running in a Linux-based computer executing EPICS. Through their avatars, users interact with virtual replicas of this equipment as they would in real-life situations. Some actions can be used to simulate the behavior of a real system to enhance the training process, while others can be used to make real changes remotely in the physical system.

  2. 3D virtual world remote laboratory to assist in designing advanced user defined DAQ systems based on FlexRIO and EPICS

    International Nuclear Information System (INIS)

    Carpeño, A.; Contreras, D.; López, S.; Ruiz, M.; Sanz, D.; Arcas, G. de; Esquembri, S.; Vega, J.; Castro, R.

    2016-01-01

    Highlights: • Assist in the design of FPGA-based data acquisition systems using EPICS and FlexRIO. • Virtual Reality technologies are highly effective at creating rich training scenarios. • Virtual actions simulate the behavior of a real system to enhance the training process. • Virtual actions can make real changes remotely in the physical ITER’s Fast Controller. - Abstract: iRIO-3DLab is a platform devised to assist developers in the design and implementation of intelligent and reconfigurable FPGA-based data acquisition systems using EPICS and FlexRIO technologies. Although these architectures are very powerful in defining the behavior of DAQ systems, this advantage comes at the price of greater difficulty in understanding how the system works, and how it should be configured and built according to the hardware available and the processing demanded by the requirements of the diagnostics. In this regard, Virtual Reality technologies are highly effective at creating rich training scenarios due to their ability to provide immersive training experiences and collaborative environments. The designed remote laboratory is based on a 3D virtual world developed in Opensim, which is accessible through a standard free 3D viewer. Using a client-server architecture, the virtual world connects with a service running in a Linux-based computer executing EPICS. Through their avatars, users interact with virtual replicas of this equipment as they would in real-life situations. Some actions can be used to simulate the behavior of a real system to enhance the training process, while others can be used to make real changes remotely in the physical system.

  3. DZERO Level 3 DAQ/Trigger Closeout

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The Tevatron Collider, located at the Fermi National Accelerator Laboratory, delivered its last 1.96 TeV proton-antiproton collisions on September 30th, 2011. The DZERO experiment continues to take cosmic data for final alignment for several more months. Since Run 2 started, in March 2001, all DZERO data has been collected by the DZERO Level 3 Trigger/DAQ System. The system is a modern, networked, commodity-hardware trigger and data acquisition system based around a large central switch with about 60 front ends and 200 trigger computers. DZERO front-end crates are VME based. A Single Board Computer interfaces between the detector data on VME and the network transport for the DAQ system. Event flow is controlled by the Routing Master, which can steer events to clusters of farm nodes based on the low-level trigger bits that fired. The farm nodes are multi-core commodity computer boxes, without special hardware, that run isolated software to make the final Level 3 trigger decision. Passed events are transferred to th...
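
The Routing Master's steering of events by fired trigger bits can be sketched as a simple lookup. The bit assignments and cluster names below are hypothetical, not the actual DZERO configuration; the point is only the mechanism of mapping a trigger mask to a destination cluster.

```python
# Hypothetical trigger-bit -> farm-cluster routing table; the Routing
# Master steers each event using the low-level trigger bits that fired.
ROUTES = {
    0: "cluster_muon",   # muon trigger bit
    1: "cluster_em",     # electromagnetic trigger bit
    2: "cluster_jet",    # jet trigger bit
}
DEFAULT_CLUSTER = "cluster_any"

def route_event(trigger_mask: int) -> str:
    # Pick the cluster of the lowest-numbered fired bit that has a route.
    for bit, cluster in sorted(ROUTES.items()):
        if trigger_mask & (1 << bit):
            return cluster
    return DEFAULT_CLUSTER

print(route_event(0b010))  # bit 1 fired -> cluster_em
print(route_event(0b100))  # bit 2 fired -> cluster_jet
print(route_event(0b000))  # no routed bit -> cluster_any
```

A real routing table would also balance load within each cluster; that refinement is omitted here.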

  4. Flexible custom designs for CMS DAQ

    CERN Document Server

    Arcidiacono, Roberta; Boyer, Vincent; Brett, Angela Mary; Cano, Eric; Carboni, Andrea; Ciganek, Marek; Cittolin, Sergio; Erhan, Samim; Gigi, Dominique; Glege, Frank; Gomez-Reino Garrido, Robert; Gulmini, Michele; Gutleber, Johannes; Jacobs, Claude; Maron, Gaetano; Meijers, Frans; Meschi, Emilio; Murray, Steven John; Oh, Alexander; Orsini, Luciano; Paus, Christoph; Petrucci, Andrea; Piedra Gomez, Jonatan; Pieri, Marco; Pollet, Lucien; Racz, Attila; Sakulin, Hannes; Schwick, Christoph; Sumorok, Konstanty; Suzuki, Ichiro; Tsirigkas, Dimitrios; Varela, Joao

    2006-01-01

    The CMS central DAQ system is built using commercial hardware (PCs and networking equipment), except for two components: the Front-end Readout Link (FRL) and the Fast Merger Module (FMM). The FRL interfaces the sub-detector-specific front-end electronics to the central DAQ system in a uniform way. The FRL is a CompactPCI module with an additional 64-bit PCI connector to host a Network Interface Card (NIC). On the sub-detector side, the data are written to the link using a FIFO-like protocol (SLINK64). The link uses Low-Voltage Differential Signaling (LVDS) technology to transfer data with a throughput of up to 400 MBytes/s. The FMM modules collect status signals from the front-end electronics of the sub-detectors, merge and monitor them, and provide the resulting signals with low latency to the first-level trigger electronics. In particular, the throttling signals allow the trigger to avoid buffer overflows and data corruption in the front-end electronics when the data produced in the front-end exceeds the c...

  5. Applications of CORBA in the ATLAS prototype DAQ

    Science.gov (United States)

    Jones, R.; Kolos, S.; Mapelli, L.; Ryabov, Y.

    2000-04-01

    This paper presents the experience of using the Common Object Request Broker Architecture (CORBA) in the ATLAS prototype DAQ project. Many communication links in the DAQ system have been designed and implemented using the CORBA standard. A public-domain package, called Inter-Language Unification (ILU), has been used to implement CORBA-based communications between DAQ components in a local area network (LAN) of heterogeneous computers. The CORBA Naming Service provides the principal mechanism through which most clients of an ORB-based system locate objects that they intend to use. In our project, conventions are employed that meaningfully partition the name space of the Naming Service according to divisions in the DAQ system itself. The Inter Process Communication (IPC) package, implemented in C++ on top of CORBA/ILU, which incorporates this facility and hides the details of the naming schema, is described. The development procedure and environment for remote database access using IPC is described. Various end-user interfaces that communicate with C++ servers via CORBA/ILU have been implemented using the Java language. To support such interfaces, a second implementation of IPC, in Java, has been developed. The design and implementation of such connections are described. An alternative CORBA implementation, ORBacus, has been evaluated and compared with ILU.

  6. Automating the CMS DAQ

    International Nuclear Information System (INIS)

    Bauer, G; Darlea, G-L; Gomez-Ceballos, G; Bawej, T; Chaze, O; Coarasa, J A; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Gomez-Reino, R; Hartl, C; Hegeman, J; Masetti, L; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Erhan, S

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  7. Use of High Resolution DAQ System to Aid Diagnosis of HD2b, a High Performance Nb3Sn Dipole

    Energy Technology Data Exchange (ETDEWEB)

    Lizarazo, J.; Doering, D.; Doolittle, L.; Galvin, J.; Caspi, S.; Dietderich, D. R.; Felice, H.; Ferracin, P.; Godeke, A.; Joseph, J.; Lietzke, A. F.; Ratti, A.; Sabbi, G. L.; Trillaud, F.; Wang, X.; Zimmerman, S.

    2008-08-17

    A novel voltage monitoring system to record voltage transients in superconducting magnets is being developed at LBNL. This system has 160 monitoring channels capable of measuring differential voltages of up to 1.5 kV with 100 kHz bandwidth and a 500 kS/s digitizing rate. This paper presents analysis results from data taken with a 16-channel prototype system. From that analysis we were able to diagnose a change in the current-temperature margin of the superconducting cable by analyzing flux-jump data collected after a magnet energy-extraction failure during testing of a high-field Nb3Sn dipole.

  8. Use of High Resolution DAQ System to Aid Diagnosis of HD2b, a High Performance Nb3Sn Dipole

    International Nuclear Information System (INIS)

    Lizarazo, J.; Doering, D.; Doolittle, L.; Galvin, J.; Caspi, S.; Dietderich, D.R.; Felice, H.; Ferracin, P.; Godeke, A.; Joseph, J.; Lietzke, A.F.; Ratti, A.; Sabbi, G.L.; Trillaud, F.; Wang, X.; Zimmerman, S.

    2008-01-01

    A novel voltage monitoring system to record voltage transients in superconducting magnets is being developed at LBNL. This system has 160 monitoring channels capable of measuring differential voltages of up to 1.5 kV with 100 kHz bandwidth and a 500 kS/s digitizing rate. This paper presents analysis results from data taken with a 16-channel prototype system. From that analysis we were able to diagnose a change in the current-temperature margin of the superconducting cable by analyzing flux-jump data collected after a magnet energy-extraction failure during testing of a high-field Nb3Sn dipole.

  9. Design of the ANTARES LCM-DAQ board test bench using a FPGA-based system-on-chip approach

    Energy Technology Data Exchange (ETDEWEB)

    Anvar, S. [CEA Saclay, DAPNIA/SEDI, 91191 Gif-sur-Yvette Cedex (France); Kestener, P. [CEA Saclay, DAPNIA/SEDI, 91191 Gif-sur-Yvette Cedex (France)]. E-mail: pierre.kestener@cea.fr; Le Provost, H. [CEA Saclay, DAPNIA/SEDI, 91191 Gif-sur-Yvette Cedex (France)

    2006-11-15

    The System-on-Chip (SoC) approach consists in using state-of-the-art FPGA devices with embedded RISC processor cores, high-speed differential LVDS links and ready-to-use multi-gigabit transceivers, allowing the development of compact systems with a substantial number of I/O channels. The required performance is obtained through a subtle separation of tasks between closely cooperating programmable hardware logic and a user-friendly software environment. We report on our experience in using the SoC approach for designing the production test bench of the off-shore readout system for the ANTARES neutrino experiment.

  10. Development and test of the DAQ system for a Micromegas prototype to be installed in the ATLAS experiment

    CERN Document Server

    Zibell, Andre; The ATLAS collaboration; Bianco, Michele; Martoiu, Victor Sorin

    2015-01-01

    A Micromegas (MM) quadruplet prototype with an active area of 0.5 m², which adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC Run-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS-compatible Read Out Driver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used in order to transmit the data, after generating valid event fragments, to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch-crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. By using the ATLAS TDAQ Soft...

  11. A Control System and Streaming DAQ Platform with Image-Based Trigger for X-ray Imaging

    Science.gov (United States)

    Stevanovic, Uros; Caselle, Michele; Cecilia, Angelica; Chilingaryan, Suren; Farago, Tomas; Gasilov, Sergey; Herth, Armin; Kopmann, Andreas; Vogelgesang, Matthias; Balzer, Matthias; Baumbach, Tilo; Weber, Marc

    2015-06-01

    High-speed X-ray imaging applications play a crucial role for non-destructive investigations of the dynamics in material science and biology. On-line data analysis is necessary for quality assurance and data-driven feedback, leading to a more efficient use of beam time and increased data quality. In this article we present a smart camera platform with embedded Field Programmable Gate Array (FPGA) processing that is able to stream and process data continuously in real time. The setup consists of a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an FPGA readout card, and a readout computer. It is seamlessly integrated in a new custom experiment control system called Concert that provides a more efficient way of operating a beamline by integrating device control, experiment process control, and data analysis. The potential of the embedded processing is demonstrated by implementing an image-based trigger. It records the temporal evolution of physical events with increased speed while maintaining the full field of view. The complete data acquisition system, with Concert and the smart camera platform, was successfully integrated and used for fast X-ray imaging experiments at KIT's synchrotron radiation facility ANKA.
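
A minimal software analogue of such an image-based trigger fires when the mean absolute difference between consecutive frames exceeds a threshold. The real system implements the decision inside the FPGA; this Python sketch, with synthetic toy frames, only illustrates the logic.

```python
def mean_abs_diff(frame_a, frame_b):
    # Mean absolute pixel difference between two equal-sized grayscale frames.
    n = len(frame_a)
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / n

def image_trigger(frames, threshold):
    # Yield indices of frames whose change relative to the previous frame
    # exceeds the threshold -- the software analogue of an image-based
    # trigger that starts recording when a physical event appears.
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > threshold:
            yield i

# Synthetic 4-pixel "frames": static, static, sudden change, static.
frames = [
    [10, 10, 10, 10],
    [10, 11, 10, 10],
    [80, 80, 80, 80],
    [80, 80, 81, 80],
]
print(list(image_trigger(frames, threshold=5.0)))  # [2]
```

An FPGA version evaluates this per pixel stream at line rate, which is what preserves the full field of view at high frame rates.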

  12. DAQ

    CERN Multimedia

    F. Meijers

    2012-01-01

    Preparations for the 2012 physics run: The HLT farm currently comprises 720 PC nodes with dual E5430 4-core CPUs (installed in 2009) and 288 PC nodes with dual X5650 6-core CPUs (installed in early 2011). This gives a total HLT capacity of 9216 cores and 18 TB of memory. It provides a capacity for HLT of about 100 ms/event (on a 2.7 GHz E5430 core) at a 100 kHz L1 rate in pp collisions. In order to be able to handle the expected higher instantaneous luminosities in 2012 (up to 7E33 at 50 ns bunch spacing) with a pile-up of ~35 events, a further extension of the HLT is necessary. This extension aims at a capacity of about 150 ms/event. The 2012 extension will consist of 256 nodes with dual 8-core CPUs of the new 'Sandy Bridge' architecture and is foreseen to be ready for deployment after the first LHC MD period (end April). In order to connect the new PC nodes to the existing data network switches, the event builder network has been re-cabled (see Image 3) to reduce the number of dat...
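
The quoted farm size can be cross-checked directly from the node counts given above:

```python
# Cross-check of the quoted HLT farm capacity.
cores_2009 = 720 * 2 * 4   # 720 nodes, dual E5430 4-core CPUs
cores_2011 = 288 * 2 * 6   # 288 nodes, dual X5650 6-core CPUs
total_cores = cores_2009 + cores_2011
print(total_cores)  # 9216, as quoted

# At ~100 ms per event, each core handles ~10 Hz, so the farm sustains
# roughly total_cores * 10 Hz ~ 92 kHz, consistent with the quoted
# 100 kHz L1 rate (the 2.7 GHz E5430 core is the normalization point).
print(total_cores * 10)
```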

  13. Distributed inter process communication framework of BES III DAQ online software

    International Nuclear Information System (INIS)

    Li Fei; Liu Yingjie; Ren Zhenyu; Wang Liang; Chinese Academy of Sciences, Beijing; Chen Mali; Zhu Kejun; Zhao Jingwei

    2006-01-01

    The DAQ (Data Acquisition) system is an important part of BES III, the large-scale high-energy physics detector at BEPC. Inter-process communication (IPC) of the online software in distributed environments is pivotal to the design and implementation of the DAQ system. This article introduces a distributed inter-process communication framework, based on CORBA, that is used in the BES III DAQ online software. The article mainly presents the design and implementation of the IPC framework and applications based on IPC. (authors)

  14. Design of a mutual authentication based on NTRUsign with a perturbation and inherent multipoint control protocol frames in an Ethernet-based passive optical network

    Science.gov (United States)

    Yin, Aihan; Ding, Yisheng

    2014-11-01

    Identity-related security issues inherently present in passive optical networks (PON) still exist in the current (1G) and next-generation (10G) Ethernet-based passive optical network (EPON) systems. We propose a mutual authentication scheme that integrates an NTRUsign digital signature algorithm with inherent multipoint control protocol (MPCP) frames over an EPON system between the optical line terminal (OLT) and optical network unit (ONU). Here, a primitive NTRUsign algorithm is significantly modified through the use of a new perturbation so that it can be effectively used for simultaneously completing signature and authentication functions on the OLT and the ONU sides. Also, in order to transmit their individual sensitive messages, which include the public key, signature, random values, and so forth, to each other, we redefine three unique frames according to the MPCP frame format. These generated messages can be added into the frames and delivered to each other, allowing the OLT and the ONU to carry out a mutual identity authentication process to verify their legal identities. Our simulation results show that this proposed scheme performs very well in resisting security attacks and has little influence on the registration efficiency of to-be-registered ONUs. A performance comparison with traditional authentication algorithms is also presented. To the best of our knowledge, no detailed design of mutual authentication in EPON can be found in the literature up to now.

  15. LabVIEW DAQ for NE213 Neutron Detector

    International Nuclear Information System (INIS)

    Al-Adeeb, Mohammed

    2003-01-01

    A neutron spectroscopy system, based on an NE213 liquid scintillation detector, is to be placed at the Stanford Linear Accelerator Center to measure neutron spectra from a few MeV up to 800 MeV beyond shielding. The NE213 scintillator, coupled with a photomultiplier tube (PMT), detects radiation and converts it into current for signal processing. Signals are processed through Nuclear Instrument Module (NIM) and Computer Automated Measurement and Control (CAMAC) modules. CAMAC is a computer-automated data acquisition and handling system. Pulses are properly prepared and fed into an analog-to-digital converter (ADC), a standard CAMAC module. The ADC classifies the incoming analog pulses into 1 of 2048 digital channels. Data acquisition (DAQ) software based on LabVIEW, version 7.0, acquires and organizes data from the CAMAC ADC. The DAQ system presents a spectrum showing the relationship between pulse events and the respective charge (digital channel number). Various photon sources, such as Co-60, Y-88, and AmBe-241, are used to calibrate the NE213 detector. For each source, a Compton edge and reference energy [units of MeVee] are obtained. A complete calibration curve results (at a given applied voltage to the PMT and pre-amplification gain) when the Compton edge and reference energy for each source are plotted. This project is focused on the development of a DAQ system and control setup to collect and process information from an NE213 liquid scintillation detector. A manual documents the process of the development and interpretation of the LabVIEW-based DAQ system. Future high-energy neutron measurements can be referenced and normalized according to this calibration curve.
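
The Compton edges used in such a calibration follow from Compton kinematics: for a gamma of energy E, the maximum energy transferred to an electron is E_C = 2E² / (m_ec² + 2E). A short sketch, using the usual Co-60 and Y-88 line energies as inputs (the exact reference energies used in the project are not given in the abstract):

```python
M_E_C2 = 0.511  # electron rest energy, MeV

def compton_edge(e_gamma_mev: float) -> float:
    # Maximum electron energy in Compton scattering (180-degree backscatter):
    # E_C = 2 E^2 / (m_e c^2 + 2 E)
    return 2 * e_gamma_mev**2 / (M_E_C2 + 2 * e_gamma_mev)

# Compton edges for typical calibration gammas (energies in MeV):
for name, e in [("Co-60 (1.173 MeV)", 1.173),
                ("Co-60 (1.332 MeV)", 1.332),
                ("Y-88  (1.836 MeV)", 1.836)]:
    print(f"{name}: edge ~ {compton_edge(e):.3f} MeVee")
```

Plotting these computed edge energies against the measured ADC channel of each edge gives the channel-to-MeVee calibration line described above.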

  16. The D0 online monitoring and automatic DAQ recovery

    International Nuclear Information System (INIS)

    Haas, A.

    2004-01-01

    The DZERO experiment, located at the Fermi National Accelerator Laboratory, has recently started the Run 2 physics program. The detector upgrade included a new Data Acquisition/Level 3 Trigger system. Part of the design of the DAQ/Trigger system was a new monitoring infrastructure. The monitoring was designed to satisfy real-time requirements with 1-second resolution as well as to handle non-real-time data. It was also designed to support a large number of displays without putting undue load on the sources of monitoring information. The resulting protocol is based on XML, is easily extensible, and has spawned a large number of displays, clients, and other applications. It is also one of the few sources of detector-performance information available outside the Online System's security wall. A tool based on this system, which provides for auto-recovery of DAQ errors, has been designed. This talk will include a description of the DZERO DAQ/Online monitor server, based on the ACE framework, the protocol, the auto-recovery tool, and several of the unique displays, which include an ORACLE-based archiver and numerous GUIs.
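
An XML-based monitoring protocol of this kind is easy to extend because a new quantity is just a new element or attribute. The message shape below is hypothetical, built with the Python standard library rather than the actual DZERO protocol definition.

```python
import xml.etree.ElementTree as ET

# Hypothetical monitoring item in the spirit of an extensible XML protocol:
# a named, timestamped counter; adding fields does not break old parsers.
def build_status(node: str, rate_hz: float, t: int) -> str:
    root = ET.Element("monitor", source=node)
    item = ET.SubElement(root, "item", name="l3_accept_rate", t=str(t))
    item.text = str(rate_hz)
    return ET.tostring(root, encoding="unicode")

def parse_rate(xml_text: str) -> float:
    root = ET.fromstring(xml_text)
    return float(root.find("item").text)

msg = build_status("l3node17", 52.5, 1030000000)
print(msg)
print(parse_rate(msg))  # 52.5
```

A display client only needs to look up the elements it understands, which is what lets many different displays share one monitor server.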

  17. LAND/R3B DAQ developments

    Energy Technology Data Exchange (ETDEWEB)

    Toernqvist, Hans; Aumann, Thomas; Loeher, Bastian [Technische Universitaet Darmstadt, Darmstadt (Germany); Simon, Haik [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Johansson, Haakan [Chalmers Institute of Technology, Goeteborg (Sweden); Collaboration: R3B-Collaboration

    2015-07-01

    Existing experimental setups aim to exploit most of the improved capabilities and specifications of the upcoming FAIR facility at GSI. Their DAQ designs will require some re-evaluation and upgrades. This presentation summarizes the R3B experimental campaigns in 2014, during which the R3B DAQ was subject to testing of several new features that will aid researchers in using larger and more complicated experimental setups in the future. It also acted as part of a small testing ground for the NUSTAR DAQ infrastructure. In order to allow correlations between several experimental sites to be extracted, newly proposed triggering and timestamping implementations were tested over significant distances. Also, with growing experimental complexity comes a greater risk of problems that may be difficult to characterize and solve. To this end, essential remote monitoring and debugging tools have been used successfully.

  18. A TCP/IP framework for ethernet-based measurement, control and experiment data distribution

    Science.gov (United States)

    Ocaya, R. O.; Minny, J.

    2010-11-01

    A complete, modular and scalable TCP/IP-based scientific instrument control and data distribution system has been designed and realized. The system features an IEEE 802.3 compliant 10 Mbps Medium Access Controller (MAC) and Physical Layer Device that is suitable for the full-duplex monitoring and control of various physically widespread measurement transducers in the presence of a local network infrastructure. The cumbersomeness of exchanging and synchronizing data between the various transducer units using physical storage media led to the choice of TCP/IP as a logical alternative. The system and methods developed are scalable for broader usage over the Internet. The system comprises PIC18F2620- and ENC28J60-based hardware and a software component written in the C, Java/JavaScript and Visual Basic.NET programming languages for event-level monitoring and browser user interfaces, respectively. The system exchanges data with the host network through IPv4 packets requested and received on a HTTP page. It also responds to ICMP echo, UDP and ARP requests through a user-selectable integrated DHCP and static IPv4 address allocation scheme. The round-trip time, throughput and polling frequency are estimated and reported. A typical application to temperature monitoring and logging is also presented.
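
A poll/response exchange like the UDP requests this system answers can be sketched on the loopback interface, with a thread standing in for the microcontroller. The request/reply strings are hypothetical, not the firmware's actual message format.

```python
import socket
import threading

# Hypothetical request/reply format for polling one temperature reading,
# in the spirit of the microcontroller's UDP responder.
def sensor_server(sock: socket.socket) -> None:
    data, addr = sock.recvfrom(64)
    if data == b"GET TEMP":
        sock.sendto(b"TEMP 23.5 C", addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))           # ephemeral port on loopback
threading.Thread(target=sensor_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
client.sendto(b"GET TEMP", server.getsockname())
reply, _ = client.recvfrom(64)
print(reply.decode())  # TEMP 23.5 C
client.close()
```

Against the real hardware, the client would simply send to the device's allocated IPv4 address instead of the loopback socket.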

  19. Development of Ethernet Based Remote Monitoring and Controlling of MST Radar Transmitters using ARM Cortex Microcontroller

    Directory of Open Access Journals (Sweden)

    Lakshmi Narayana ROSHANNA

    2013-01-01

    The recently emerging Web Services technology provides a new and effective solution for industrial automation in online control and remote monitoring. In this paper, a Web-Service-based Remote Monitoring and Controlling of Radar Transmitters system for safety management (WMCT), developed for the MST Radar, is described. It implements remote supervision, data logging and control of the MST radar transmitters. The system is built around an ARM Cortex-M3 processor to monitor and control the 32 triode-based transmitters of the 53-MHz radar. The system controls the transmitters via the Internet using an Ethernet client-server and stores their health status in a database for radar performance analysis. The system enables scientists to operate and control the radar transmitters from a remote client machine through a webpage.
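
Serving transmitter health to a remote webpage can be sketched with Python's http.server standing in for the embedded ARM web server; the status fields and JSON format are hypothetical, chosen only to illustrate the client-server monitoring pattern.

```python
import http.client
import http.server
import json
import threading

# Hypothetical health record for one of the 32 triode-based transmitters.
STATUS = {"tx_id": 7, "filament_ok": True, "plate_voltage_kv": 8.5}

class StatusHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the transmitter health record as JSON, as the embedded
        # web server would for the remote monitoring page.
        body = json.dumps(STATUS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.handle_request, daemon=True).start()

# Remote client fetching the status record:
conn = http.client.HTTPConnection(*server.server_address)
conn.request("GET", "/status")
status = json.loads(conn.getresponse().read())
print(status["plate_voltage_kv"])  # 8.5
conn.close()
server.server_close()
```

The browser webpage described in the abstract plays the role of this client, polling the device and rendering the returned status.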

  20. High performance message passing for the ATLAS DAQ/EF-1 project

    CERN Document Server

    Mornacchi, Giuseppe

    1999-01-01

    Summary form only. A message passing library has been developed in the context of the ATLAS DAQ/EF-1 project. It is used for time-critical applications within the front-end part of the DAQ system, mainly to exchange data control messages between I/O processors. Key objectives of the design were low message overheads, efficient use of the data transfer buses, provision of broadcast functionality, and a hardware- and operating-system-independent implementation of the application interface. The design and implementation of the message passing library are presented. As required by the project, the implementation is based on commercial components, namely VMEbus, PCI, the Lynx-OS real-time operating system and an additional inter-processor link, PVIC. The latter offers broadcast functionality identified as being important to the overall performance of the message passing. In addition, performance benchmarks for all implementing buses are presented, for both simple test programs and the full DAQ applications.

  1. Measurement of Z boson production in association with jets at the LHC and study of a DAQ system for the Triple-GEM detector in view of the CMS upgrade

    CERN Document Server

    Léonard, Alexandre

    This PhD thesis presents the measurement of the differential cross section for the production of a Z boson in association with jets in proton-proton collisions taking place at the Large Hadron Collider (LHC) at CERN, at a centre-of-mass energy of 8 TeV. A development of a data acquisition (DAQ) system for the Triple-Gas Electron Multiplier (GEM) detector in view of the Compact Muon Solenoid (CMS) detector upgrade is also presented. The events used for the data analysis were collected by the CMS detector during the year 2012 and constitute a sample of 19.6/fb of integrated luminosity. The cross section measurements are performed as a function of the jet multiplicity, the jet transverse momentum and pseudorapidity, and the scalar sum of the jet transverse momenta. The results were obtained by correcting the observed distributions for detector effects. The measured differential cross sections are compared to some state of the art Monte Carlo predictions MadGraph 5, Sherpa 2 and MadGraph5_aMC@NLO. These measureme...

  2. Front-end DAQ strategy and implementation for the KLOE-2 experiment

    Science.gov (United States)

    Branchini, P.; Budano, A.; Balla, A.; Beretta, M.; Ciambrone, P.; De Lucia, E.; D'Uffizi, A.; Marciniewski, P.

    2013-04-01

    A new front-end data acquisition (DAQ) system has been conceived for the data collection of the new detectors which will be installed by the KLOE-2 collaboration. This system consists of a general-purpose FPGA-based DAQ module and a VME board hosting up to 16 optical links. The DAQ module has been built around a Virtex-4 FPGA and is able to acquire up to 1024 different channels distributed over 16 front-end slave cards. Each module is a general interface board (GIB) which also performs first-level data concentration tasks. The GIB has an optical interface, an RS-232 interface, a USB interface and a Gigabit Ethernet interface. The optical interface is used for DAQ purposes, while the Gigabit Ethernet interface serves monitoring and debugging. Two new detectors exploit this strategy to collect data. Optical links deliver data to the VME board, which performs data concentration tasks; the return optical link from the board to the GIB is used to initialize the front-end cards. The VME interface of the module implements the VME 2eSST protocol in order to sustain a peak data rate of up to 320 MB/s. At the moment the system is working at the Frascati National Laboratory (LNF).

  3. Front-end DAQ strategy and implementation for the KLOE-2 experiment

    International Nuclear Information System (INIS)

    Branchini, P; Budano, A; Balla, A; Beretta, M; Ciambrone, P; Lucia, E De; D'Uffizi, A; Marciniewski, P

    2013-01-01

    A new front-end data acquisition (DAQ) system has been conceived for the data collection of the new detectors which will be installed by the KLOE-2 collaboration. This system consists of a general-purpose FPGA-based DAQ module and a VME board hosting up to 16 optical links. The DAQ module has been built around a Virtex-4 FPGA and is able to acquire up to 1024 different channels distributed over 16 front-end slave cards. Each module is a general interface board (GIB) which also performs first-level data concentration tasks. The GIB has an optical interface, an RS-232 interface, a USB interface and a Gigabit Ethernet interface. The optical interface is used for DAQ purposes, while the Gigabit Ethernet interface serves monitoring and debugging. Two new detectors exploit this strategy to collect data. Optical links deliver data to the VME board, which performs data concentration tasks; the return optical link from the board to the GIB is used to initialize the front-end cards. The VME interface of the module implements the VME 2eSST protocol in order to sustain a peak data rate of up to 320 MB/s. At the moment the system is working at the Frascati National Laboratory (LNF).

  4. A Web 2.0 approach to DAQ monitoring and controlling

    Energy Technology Data Exchange (ETDEWEB)

    Penschuck, Manuel [Goethe-Universitaet, Frankfurt (Germany); Collaboration: TRB3-Collaboration

    2014-07-01

    In the scope of experimental set-ups for the upcoming FAIR experiments, an FPGA-based general purpose trigger and read-out board (TRB3) has been developed which is already in use in several detector set-ups (e.g. HADES, CBM-MVD, PANDA). For on- and off-board communication between the DAQ's subsystems, TrbNet, a specialised high-speed, low-latency network protocol developed for the DAQ system of the HADES detector, is used. Communication with any computer infrastructure is provided by Gigabit Ethernet. Monitoring and configuration of all DAQ systems and front-end electronics are consistently managed by the powerful slow-control features of TrbNet, supported by a flexible and mature software tool-chain designed to meet the diverse requirements during development, setup phase and experiment. Most building blocks offer a graphical user interface (GUI) implemented using omnipresent Web 2.0 technologies, which enables rapid prototyping and network-transparent access and imposes minimal software dependencies on the client's machine. This contribution presents the GUI-related features and infrastructure, highlighting the multiple interfaces from the DAQ's slow-control to the client's web browser.

  5. The HLT, DAQ and DCS TDR

    CERN Multimedia

    Wickens, F. J

    At the end of June the Trigger-DAQ community achieved a major milestone with the submission to the LHCC of the Technical Design Report (TDR) for DAQ, HLT and DCS. The first unbound copies were handed to the LHCC referees on the scheduled date of 30th June, this was followed a few days later by a limited print run which produced the first bound copies (see Figure 1). As had previously been announced both to the LHCC and the ATLAS Collaboration it was not possible on this timescale to give a complete validation of all of the aspects of the architecture in the TDR. So it had been agreed that further work would continue over the summer to provide more complete results for the formal review by the LHCC of the TDR in September. Thus there followed an intense programme of measurements and analysis: especially to provide results for HLT both in testbeds and for the event selection software itself; to provide additional information on scaling of the dataflow aspects; to provide first results on the new prototype ROBin...

  6. Web tools to monitor and debug DAQ hardware

    International Nuclear Information System (INIS)

    Desavouret, Eugene; Nogiec, Jerzy M.

    2003-01-01

    A web-based toolkit to monitor and diagnose data acquisition hardware has been developed. It allows for remote testing, monitoring, and control of VxWorks data acquisition computers and associated instrumentation using the HTTP protocol and a web browser. This solution provides concurrent and platform independent access, supplementary to the standard single-user rlogin mechanism. The toolkit is based on a specialized web server, and allows remote access and execution of select system commands and tasks, execution of test procedures, and provides remote monitoring of computer system resources and connected hardware. Various DAQ components such as multiplexers, digital I/O boards, analog to digital converters, or current sources can be accessed and diagnosed remotely in a uniform and well-organized manner. Additionally, the toolkit application supports user authentication and is able to enforce specified access restrictions

  7. Editor for Remote Database used in ATLAS Trigger/DAQ

    CERN Document Server

    Meessen, C; Valenta, J

    2006-01-01

    The poster gives a brief summary of the ATLAS T/DAQ system, then introduces the RDB database and describes the RDB Editor application, including its internal structure, GUI features, etc. The RDB Editor is an easy-to-use Java application which allows simple navigation among the huge number of objects stored in the RDB. It supports bookmarks, histories, etc. in the manner familiar from web browsers. Moreover, the application can be enhanced by specialized (graphical) viewers for objects of a particular class, allowing the user to see, for example, details that are hard to spot in a textual view. As an example of such a plug-in, a viewer for the EFD_Configuration class was developed.

  8. H4DAQ: a modern and versatile data-acquisition package for calorimeter prototypes test-beams

    Science.gov (United States)

    Marini, A. C.

    2018-02-01

    The upgrade of the particle detectors for the HL-LHC or for future colliders requires an extensive program of tests to qualify different detector prototypes with dedicated test beams. A common data-acquisition system, H4DAQ, was developed for the H4 test beam line at the North Area of the CERN SPS in 2014 and has since been adopted in various applications for the CMS experiment and the AIDA project. Several calorimeter prototypes and precision timing detectors have used the system from 2014 to 2017. H4DAQ has proven to be a versatile application and has been ported to many other beam test environments. H4DAQ is fast, simple and modular, and can be configured to support various kinds of setup. The functionalities of the DAQ core software are split into three configurable finite state machines: data readout, run control, and event builder. The distribution of information and data between the various computers is performed using ZeroMQ (0MQ) sockets. Plugins are available to read different types of hardware, including VME crates with many types of boards, PADE boards, custom front-end boards and beam instrumentation devices. The raw data are saved as ROOT files, using the CERN C++ ROOT libraries. A graphical user interface, based on the Python GTK libraries, is used to operate H4DAQ, and an integrated data quality monitoring (DQM) application, written in C++, allows fast processing of the events for quick feedback to the user. As the 0MQ libraries are also available for the National Instruments LabVIEW program, this environment can easily be integrated within H4DAQ applications.
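    The split of the DAQ core into configurable finite state machines can be illustrated with a minimal table-driven FSM; the state and command names below are generic placeholders, not H4DAQ's actual ones:

```python
class FiniteStateMachine:
    """Minimal table-driven FSM of the kind used for run control in a DAQ."""

    # (current state, command) -> next state; names are illustrative only
    TRANSITIONS = {
        ("idle", "configure"): "configured",
        ("configured", "start_run"): "running",
        ("running", "stop_run"): "configured",
        ("configured", "reset"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, command: str) -> str:
        """Apply a command; reject it if not allowed in the current state."""
        key = (self.state, command)
        if key not in self.TRANSITIONS:
            raise RuntimeError(f"command {command!r} not allowed in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state

run_control = FiniteStateMachine()
run_control.handle("configure")
print(run_control.handle("start_run"))  # running
```

    Keeping the allowed transitions in a table makes each machine (readout, run control, event builder) easy to reconfigure without touching the dispatch logic.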

  9. The 40 MHz trigger-less DAQ for the LHCb Upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Campora Perez, D.H. [INFN CNAF, Bologna (Italy); Falabella, A., E-mail: antonio.falabella@cnaf.infn.it [CERN, Geneva (Switzerland); Galli, D. [INFN Sezione di Bologna, Bologna (Italy); Università Bologna, Bologna (Italy); Giacomini, F. [CERN, Geneva (Switzerland); Gligorov, V. [INFN CNAF, Bologna (Italy); Manzali, M. [Università Bologna, Bologna (Italy); Università Ferrara, Ferrara (Italy); Marconi, U. [INFN Sezione di Bologna, Bologna (Italy); Neufeld, N.; Otto, A. [INFN CNAF, Bologna (Italy); Pisani, F. [INFN CNAF, Bologna (Italy); Università la Sapienza, Roma (Italy); Vagnoni, V.M. [INFN Sezione di Bologna, Bologna (Italy)

    2016-07-11

    The LHCb experiment will undergo a major upgrade during the second long shutdown (2018–2019), aiming to let LHCb collect an order of magnitude more data with respect to Run 1 and Run 2. The maximum readout rate of 1 MHz is the main limitation of the present LHCb trigger. The upgraded detector, apart from major detector upgrades, foresees a full readout running at the LHC bunch crossing frequency of 40 MHz, using an entirely software-based trigger. A new high-throughput PCIe Generation 3 based readout board, named PCIe40, has been designed for this purpose. The readout board will allow an efficient and cost-effective implementation of the DAQ system by means of high-speed PC networks. The network-based DAQ system reads data fragments, performs the event building, and transports events to the High-Level Trigger at an estimated aggregate rate of about 32 Tbit/s. Different architectures for the DAQ can be implemented, such as push, pull and traffic shaping with a barrel shifter. Possible technology candidates for the foreseen event builder under study are InfiniBand and Gigabit Ethernet. In order to define the best implementation of the event builder, we are performing tests on different platforms with different technologies. For testing we use an event-builder evaluator, which consists of a flexible software implementation, to be used on small-size test beds as well as on HPC-scale facilities. The architecture of the DAQ system and up-to-date performance results will be presented.

  10. Measurement Of Neutron Radius In Lead By Parity Violating Scattering Flash ADC DAQ

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Zafar [Christopher Newport Univ., Newport News, VA (United States)

    2012-06-01

    This dissertation reports on PREx, a parity violation experiment designed to measure the neutron radius in 208Pb. PREx was performed in Hall A of the Thomas Jefferson National Accelerator Facility from March 19th to June 21st. Longitudinally polarized electrons at an energy of 1 GeV were scattered at an angle of θlab = 5.8° from the lead target. The beam-corrected parity-violating counting-rate asymmetry is Acorr = (594 ± 50(stat) ± 9(syst)) ppb at Q2 = 0.009068 GeV2. This dissertation also presents the details of the Flash ADC data acquisition (FADC DAQ) system for Moller polarimetry in Hall A of the Thomas Jefferson National Accelerator Facility. The Moller polarimeter measures the beam polarization to high precision to meet the specification of PREx (the lead radius experiment). The FADC DAQ is part of the upgrade of Moller polarimetry to reduce the systematic error for PREx. The hardware setup and the results of the FADC DAQ analysis are presented

  11. A modern and versatile data-acquisition package for calorimeter prototypes test-beams H4DAQ

    CERN Document Server

    Marini, Andrea Carlo

    2017-01-01

    The upgrade of the calorimeters for the HL-LHC or for future colliders requires an extensive programme of tests to qualify different detector prototypes with dedicated test beams. A common data-acquisition system (called H4DAQ) was developed for the H4 test beam line at the North Area of the CERN SPS in 2014 and it has since been adopted by an increasing number of teams involved in the CMS experiment and AIDA groups. Several different calorimeter prototypes and precision timing detectors have used H4DAQ from 2014 to 2017, and it has proved to be a versatile application, portable to many other beam test environments (the CERN beam lines EA-T9 at the PS, H2 and H4 at the SPS, and at the INFN Frascati Beam Test Facility).The H4DAQ is fast, simple, modular and can be configured to support different setups. The different functionalities of the DAQ core software are split into three configurable finite state machines the data readout, run control, and event builder. The distribution of information and data betw...

  12. VMEbus based computer and real-time UNIX as infrastructure of DAQ

    International Nuclear Information System (INIS)

    Yasu, Y.; Fujii, H.; Nomachi, M.; Kodama, H.; Inoue, E.; Tajima, Y.; Takeuchi, Y.; Shimizu, Y.

    1994-01-01

    This paper describes what the authors have constructed as the infrastructure of a data acquisition system (DAQ). The paper reports recent developments concerning an HP VME board computer with LynxOS (HP742rt/HP-RT) and Alpha/OSF1 with a VMEbus adapter. It also reports the current status of a Benchmark Suite for Data Acquisition (DAQBENCH), for measuring not only the performance of VME/CAMAC access but also that of context switching, inter-process communications and so on, for various computers including workstation-based systems and VME board computers

  13. Performance Comparison of 112-Gb/s DMT, Nyquist PAM4, and Partial-Response PAM4 for Future 5G Ethernet-Based Fronthaul Architecture

    Science.gov (United States)

    Eiselt, Nicklas; Muench, Daniel; Dochhan, Annika; Griesser, Helmut; Eiselt, Michael; Olmos, Juan Jose Vegas; Monroy, Idelfonso Tafur; Elbers, Joerg-Peter

    2018-05-01

    For a future 5G Ethernet-based fronthaul architecture, 100G trunk lines with a transmission distance of up to 10 km of standard single mode fiber (SSMF), in combination with cheap grey optics to daisy-chain cell site network interfaces, are a promising cost- and power-efficient solution. For such a scenario, different intensity modulation and direct detection (IMDD) formats at a data rate of 112 Gb/s, namely Nyquist four-level pulse amplitude modulation (PAM4), discrete multi-tone transmission (DMT) and partial-response (PR) PAM4, are experimentally investigated, using a low-cost electro-absorption modulated laser (EML), a 25G driver and current state-of-the-art high-speed 84 GS/s CMOS digital-to-analog converter (DAC) and analog-to-digital converter (ADC) test chips. Each modulation format is optimized independently for the desired scenario, and its digital signal processing (DSP) requirements are investigated. The performance of Nyquist PAM4 and PR PAM4 depends very much on the efficiency of pre- and post-equalization. We show the necessity of at least 11 FFE taps for pre-emphasis and up to 41 FFE coefficients at the receiver side. In addition, PR PAM4 requires an MLSE with four states to decode the signal back to a PAM4 signal. By contrast, bit- and power-loading (BL, PL) is crucial for DMT, and an FFT length of at least 512 is necessary. With optimized parameters, all modulation formats yield very similar performance, demonstrating a transmission distance of up to 10 km over SSMF with bit error rates (BERs) below a FEC threshold of 4.4E-3, allowing error-free transmission.
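    PAM4, as investigated above, packs two bits into each of four amplitude levels; a minimal Gray-coded symbol mapper (a standard textbook mapping, not necessarily the exact one used in the experiment's DSP chain) looks like this:

```python
# Gray-coded PAM4 mapping: adjacent amplitude levels differ in only one bit,
# so a single-level decision error corrupts at most one bit.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def bits_to_pam4(bits):
    """Map an even-length bit sequence onto PAM4 amplitude levels."""
    if len(bits) % 2:
        raise ValueError("PAM4 needs an even number of bits")
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(bits_to_pam4([0, 0, 1, 0, 1, 1]))  # [-3, 3, 1]
```

    At 112 Gb/s this two-bits-per-symbol packing is what keeps the symbol rate at 56 GBd, within reach of the 84 GS/s converter test chips.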

  14. A rule-based verification and control framework in ATLAS Trigger-DAQ

    CERN Document Server

    Kazarov, A; Lehmann-Miotto, G; Sloper, J E; Ryabov, Yu; Computing In High Energy and Nuclear Physics

    2007-01-01

    In order to meet the requirements of ATLAS data taking, the ATLAS Trigger-DAQ system is composed of O(1000) applications running on more than 2600 computers in a network. At this system size, software and hardware failures are quite frequent. To minimize system downtime, the Trigger-DAQ control system includes advanced verification and diagnostics facilities. The operator can draw on tests and on the expertise of the TDAQ and detector developers in order to diagnose and recover from errors, automatically where possible. The TDAQ control system is built as a distributed tree of controllers, where the behavior of each controller is defined in a rule-based language allowing easy customization. The control system also includes a verification framework which allows users to develop and configure tests for any component in the system, with different levels of complexity. It can be used as a stand-alone test facility for a small detector installation, as part of the general TDAQ initialization procedure, and for diagnosing the problems ...

  15. Implementation of CMS Central DAQ monitoring services in Node.js

    CERN Document Server

    Vougioukas, Michail

    2015-01-01

    This report summarizes my contribution to the CMS Central DAQ monitoring system, in my capacity as a CERN Summer Student Programme participant, from June to September 2015. Specifically, my work focused on rewriting real-time monitoring web services (mostly Elasticsearch-based but also some Oracle-based) for the CMS Data Acquisition (Run II Filterfarm) from Apache/PHP to Node.js/JavaScript, and on optimizing them. Moreover, it included an implementation of web-server caching, for better scalability when simultaneous web clients use the services. Measurements confirmed that the software developed during this project indeed has the potential to provide scalable services.
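    The caching idea, shielding the backend (Elasticsearch or Oracle queries) from many simultaneous web clients, can be approximated by a small time-to-live cache; this is a generic sketch in Python rather than the actual Node.js implementation:

```python
import time

class TTLCache:
    """Tiny time-based response cache: within the TTL window, repeated
    requests for the same key are served from memory instead of hitting
    the backend again."""

    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock           # injectable clock, handy for testing
        self._store = {}             # key -> (expiry time, cached value)

    def get_or_fetch(self, key, fetch):
        """Return the cached value for key, or call fetch() and cache it."""
        now = self.clock()
        hit = self._store.get(key)
        if hit is not None and hit[0] > now:
            return hit[1]            # fresh: no backend query
        value = fetch()              # stale or missing: one backend query
        self._store[key] = (now + self.ttl, value)
        return value

cache = TTLCache(ttl_seconds=5.0)
value = cache.get_or_fetch("daq/summary", lambda: "fetched from backend")
print(value)  # fetched from backend
```

    With N clients polling the same monitoring page, the backend sees roughly one query per TTL window instead of N, which is the scalability gain the report measures.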

  16. Jet energy measurements at ILC. Calorimeter DAQ requirements and application in Higgs boson mass measurements

    International Nuclear Information System (INIS)

    Ebrahimi, Aliakbar

    2017-11-01

    required for the Higgs boson mass measurement can only be achieved using the particle flow approach to reconstruction. The particle flow approach requires highly granular calorimeters and a highly efficient tracking system. The CALICE collaboration is developing highly granular calorimeters for such applications. One of the challenges in the development of such calorimeters, with their millions of read-out channels, is the Data Acquisition (DAQ) system. The second part of this thesis involves contributions to the development of a new DAQ system for the CALICE scintillator calorimeters. The new DAQ system fulfills the requirements for the prototype tests while being scalable to larger systems. The requirements and general architecture of the DAQ system are outlined in this thesis. The new DAQ system was commissioned and tested with particle beams at the CERN Proton Synchrotron test beam facility in 2014, results of which are presented here.

  17. Jet energy measurements at ILC. Calorimeter DAQ requirements and application in Higgs boson mass measurements

    Energy Technology Data Exchange (ETDEWEB)

    Ebrahimi, Aliakbar

    2017-11-15

    jet energy resolution required for the Higgs boson mass measurement can only be achieved using the particle flow approach to reconstruction. The particle flow approach requires highly granular calorimeters and a highly efficient tracking system. The CALICE collaboration is developing highly granular calorimeters for such applications. One of the challenges in the development of such calorimeters, with their millions of read-out channels, is the Data Acquisition (DAQ) system. The second part of this thesis involves contributions to the development of a new DAQ system for the CALICE scintillator calorimeters. The new DAQ system fulfills the requirements for the prototype tests while being scalable to larger systems. The requirements and general architecture of the DAQ system are outlined in this thesis. The new DAQ system was commissioned and tested with particle beams at the CERN Proton Synchrotron test beam facility in 2014, results of which are presented here.

  18. Overview and future developments of the FPGA-based DAQ of COMPASS

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Yunpeng; Huber, Stefan; Konorov, Igor; Levit, Dmytro [Physik-Department E18, Technische Universitaet Muenchen (Germany); Bodlak, Martin [Department of Low-Temperature Physics, Charles University Prague (Czech Republic); Frolov, Vladimir [European Organization for Nuclear Research - CERN (Switzerland); Jary, Vladimir; Virius, Miroslav [Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University (Czech Republic); Novy, Josef [European Organization for Nuclear Research - CERN (Switzerland); Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University (Czech Republic); Steffen, Dominik [Physik-Department E18, Technische Universitaet Muenchen (Germany); European Organization for Nuclear Research - CERN (Switzerland)

    2016-07-01

    COMPASS is a fixed-target experiment at the SPS accelerator at CERN dedicated to the study of hadron structure and spectroscopy. In 2014, an FPGA-based data acquisition system (FDAQ) was deployed. Its hardware event builder, consisting of nine custom-designed FPGA cards, replaced 30 distributed online computers and around 100 PCI cards. As a result, the new DAQ provides higher bandwidth and better reliability. By buffering the data, the system exploits the spill structure of the SPS, averaging the maximum on-spill data rate of 1.5 GB/s over the whole SPS duty cycle. Modern run control software allows user-friendly monitoring and configuration of the hardware nodes of the event builder. From 2016, it is planned to wire all point-to-point high-speed links via a fully programmable crosspoint switch. The crosspoint switch will provide a fully customizable DAQ network topology between front-end electronics, the event building hardware, and the readout computers. It will therefore simplify compensation for hardware failure and improve load balancing.

  19. A TCP/IP transport layer for the DAQ of the CMS experiment

    International Nuclear Information System (INIS)

    Kozlovszky, M.

    2004-01-01

    The CMS collaboration is currently investigating various networking technologies that may meet the requirements of the CMS Data Acquisition System (DAQ). During this study, a peer transport component based on TCP/IP has been developed using object-oriented techniques for the distributed DAQ framework named XDAQ. This framework has been designed to facilitate the development of distributed data acquisition systems within the CMS experiment. The peer transport component has to meet three main requirements. Firstly, it has to provide fair access to the communication medium for competing applications. Secondly, it has to provide as much of the available bandwidth to the application layer as possible. Finally, it has to hide the complexity of using non-blocking TCP/IP connections from the application layer. This paper describes the development of the peer transport component, then presents and draws conclusions on the measurements made during tests. The major topics investigated include: blocking versus non-blocking communication, TCP/IP configuration options, and multi-rail connections
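    Non-blocking communication of the kind compared here relies on readiness notification rather than blocking reads; a minimal sketch using Python's selectors module illustrates the pattern (a local socketpair stands in for a real peer-to-peer TCP connection, so this is an illustration of the I/O style, not of XDAQ's API):

```python
import selectors
import socket

def nonblocking_echo_once(data: bytes) -> bytes:
    """Send bytes over a non-blocking socket pair and read them back using
    readiness notification instead of a blocking recv."""
    a, b = socket.socketpair()
    a.setblocking(False)
    b.setblocking(False)
    sel = selectors.DefaultSelector()
    sel.register(b, selectors.EVENT_READ)   # ask to be told when b is readable
    a.send(data)
    received = b""
    while len(received) < len(data):
        # select() returns only sockets that are ready, so recv never blocks
        for key, _ in sel.select(timeout=1.0):
            received += key.fileobj.recv(4096)
    sel.close()
    a.close()
    b.close()
    return received

print(nonblocking_echo_once(b"event fragment"))  # b'event fragment'
```

    Hiding this readiness loop behind a simple send/receive interface is exactly the complexity the peer transport component absorbs on behalf of the application layer.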

  20. The DAQ needle in the big-data haystack

    Science.gov (United States)

    Meschi, E.

    2015-12-01

    In the last three decades, HEP experiments have faced the challenge of manipulating larger and larger masses of data from increasingly complex, heterogeneous detectors with millions and then tens of millions of electronic channels. LHC experiments abandoned the monolithic architectures of the nineties in favor of a distributed approach, leveraging the appearance of high-speed switched networks developed for digital telecommunication and the internet, and the corresponding increase of memory bandwidth available in off-the-shelf consumer equipment. This led to a generation of experiments where custom electronics triggers, analysing coarser-granularity “fast” data, are confined to the first phase of selection, where predictable latency and real-time processing for a modest initial rate reduction are “a necessary evil”. Ever more sophisticated algorithms are projected for use in HL-LHC upgrades, using tracker data in the low-level selection in high-multiplicity environments, and requiring extremely complex data interconnects. These systems are quickly obsolete and inflexible but must nonetheless survive and be maintained across the extremely long life span of current detectors. New high-bandwidth bidirectional links could make high-speed low-power full readout at the crossing rate a possibility already in the next decade. At the same time, massively parallel and distributed analysis of unstructured data produced by loosely connected, “intelligent” sources has become ubiquitous in commercial applications, while the mass of persistent data produced by e.g. the LHC experiments has made multiple-pass, systematic, end-to-end offline processing increasingly burdensome. A possible evolution of DAQ and trigger architectures could lead to detectors with extremely deep asynchronous or even virtual pipelines, where data streams from the various detector channels are analysed and indexed in situ quasi-real-time using intelligent, pattern-driven data organization, and

  1. The DAQ needle in the big-data haystack

    International Nuclear Information System (INIS)

    Meschi, E

    2015-01-01

    In the last three decades, HEP experiments have faced the challenge of manipulating larger and larger masses of data from increasingly complex, heterogeneous detectors with millions and then tens of millions of electronic channels. LHC experiments abandoned the monolithic architectures of the nineties in favor of a distributed approach, leveraging the appearance of high-speed switched networks developed for digital telecommunication and the internet, and the corresponding increase of memory bandwidth available in off-the-shelf consumer equipment. This led to a generation of experiments where custom electronics triggers, analysing coarser-granularity “fast” data, are confined to the first phase of selection, where predictable latency and real-time processing for a modest initial rate reduction are “a necessary evil”. Ever more sophisticated algorithms are projected for use in HL-LHC upgrades, using tracker data in the low-level selection in high-multiplicity environments, and requiring extremely complex data interconnects. These systems are quickly obsolete and inflexible but must nonetheless survive and be maintained across the extremely long life span of current detectors. New high-bandwidth bidirectional links could make high-speed low-power full readout at the crossing rate a possibility already in the next decade. At the same time, massively parallel and distributed analysis of unstructured data produced by loosely connected, “intelligent” sources has become ubiquitous in commercial applications, while the mass of persistent data produced by e.g. the LHC experiments has made multiple-pass, systematic, end-to-end offline processing increasingly burdensome. A possible evolution of DAQ and trigger architectures could lead to detectors with extremely deep asynchronous or even virtual pipelines, where data streams from the various detector channels are analysed and indexed in situ quasi-real-time using intelligent, pattern-driven data organization, and

  2. LHC detectors trigger/DAQ at LHC

    CERN Document Server

    Sphicas, Paris

    1998-01-01

    At its design luminosity, the LHC will deliver hundreds of millions of proton-proton interactions per second. Storage and computing limitations limit the number of physics events that can be recorded to about 100 per second. The selection will be carried out by the Trigger and data acquisition systems of the experiments. This lecture will review the requirements, architectures and various designs currently considered.

  3. LHCb: Improvements in the LHCb DAQ

    CERN Multimedia

    Campora, D; Schwemmer, R

    2014-01-01

    The LHCb data acquisition system is realized as a Gigabit Ethernet local area network with more than 330 FPGA-driven data sources, two core routers, 56 fan-out switches and more than 1400 servers (soon to be upgraded to about 1800). In total there are almost 3000 switch ports. Data are pushed top-down, quasi-synchronously, using an unreliable datagram protocol (similar to UDP).
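    The push-style, fire-and-forget datagram transport can be sketched on the loopback interface; this toy omits LHCb's actual event-fragment format, flow control and congestion handling, and is only meant to show why an unreliable push is cheap for the sender:

```python
import socket

def push_fragment(payload: bytes) -> bytes:
    """Push one event fragment over UDP on loopback: the sender performs no
    handshake and gets no acknowledgement; the receiver simply collects
    whatever arrives."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))             # receiver: an event-builder node
    rx.settimeout(1.0)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(payload, rx.getsockname())  # sender: one datagram, no retransmit
    data, _ = rx.recvfrom(65535)
    tx.close()
    rx.close()
    return data

print(push_fragment(b"fragment-0042"))  # b'fragment-0042'
```

    Because the sources push without waiting for acknowledgements, loss prevention in such a network falls to the switch fabric and to quasi-synchronous scheduling rather than to the transport protocol.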

  4. Readout Unit-FPGA version for link multiplexers, DAQ and VELO trigger

    CERN Document Server

    Müller, H; Guirao, A; Bal, F

    2003-01-01

    The FPGA-based Readout Unit (RU) was designed as the entry stage to the readout networks of the LHCb data acquisition and L1-VELO topology trigger systems. The RU performs subevent building from up to 16 custom S-link inputs towards a commercial readout network via a PCI interface card. For output to custom links, as required in data-link multiplexer applications, an output S-link transmitter interface is alternatively available. Baseline readout networks for the RU are intelligent Gbit Ethernet NIC cards for the DAQ system and an SCI shared-memory network for the L1-VELO system. New protocols, such as 10 Gbit Ethernet or InfiniBand, may be adopted once suitable PCI interfaces and Linux device drivers become available. The two baseline RU modes of operation are: 1) link multiplexer with N*S-link to single S-link; 2) event-builder interface with quad S-link-to-PCI network interface.

  5. Developments and applications of DAQ framework DABC v2

    International Nuclear Information System (INIS)

    Adamczewski-Musch, J; Kurz, N; Linev, S

    2015-01-01

    The Data Acquisition Backbone Core (DABC) is a software framework for distributed data acquisition. In 2013 Version 2 of DABC was released with several improvements. For monitoring and control, an HTTP web server and a proprietary command-channel socket have been provided. Web-browser GUIs have been implemented for configuration and control of DABC and MBS DAQ nodes via this HTTP server. Several specific plug-ins, for example those interfacing PEXOR/KINPEX optical readout PCIe boards, or HADES trbnet input and hld file output, have been further developed. In 2014, DABC v2 was used for production data taking in the HADES collaboration's pion beam time at GSI. It fully replaced the functionality of the previous event-builder software and added new features concerning online monitoring. (paper)

  6. CMS DAQ current and future hardware upgrades up to post Long Shutdown 3 (LS3) times

    CERN Document Server

    Racz, Attila; Behrens, Ulf; Branson, James; Chaze, Olivier; Cittolin, Sergio; Contescu, Cristian; da Silva Gomes, Diego; Darlea, Georgiana-Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, Marc; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan Richard; Gigi, Dominique; Gladki, Maciej; Glege, Frank; Gomez-Ceballos, Guillelmo; Hegeman, Jeroen; Holzner, Andre; Janulis, Mindaugas; Lettrich, Michael; Meijers, Frans; Meschi, Emilio; Mommsen, Remigius K; Morovic, Srecko; O'Dell, Vivian; Orn, Samuel Johan; Orsini, Luciano; Papakrivopoulos, Ioannis; Paus, Christoph; Petrova, Petia; Petrucci, Andrea; Pieri, Marco; Rabady, Dinyar; Reis, Thomas; Sakulin, Hannes; Schwick, Christoph; Simelevicius, Dainius; Vazquez Velez, Cristina; Vougioukas, Michail; Zejdl, Petr

    2017-01-01

    Following the first LHC collisions seen and recorded by CMS in 2009, the DAQ hardware went through a major upgrade during LS1 (2013- 2014) and new detectors have been connected during 2015-2016 and 2016-2017 winter shutdowns. Now, LS2 (2019-2020) and LS3 (2024-mid 2026) are actively being prepared. This paper shows how CMS DAQ hardware has evolved from the beginning and will continue to evolve in order to meet the future challenges posed by High Luminosity LHC (HL-LHC) and the CMS detector evolution. In particular, post LS3 DAQ architectures are focused upon.

  7. In-beam experience with a highly granular DAQ and control network: TrbNet

    International Nuclear Information System (INIS)

    Michel, J; Korcyl, G; Maier, L; Traxler, M

    2013-01-01

    Virtually all Data Acquisition Systems (DAQ) for nuclear and particle physics experiments use a large number of Field Programmable Gate Arrays (FPGAs) for data transport and for more complex tasks such as pattern recognition and data reduction. All these FPGAs in a large system have to share a common state, like a trigger number or an epoch counter, to keep the system synchronized for consistent event/epoch building. Additionally, the collected data has to be transported with high bandwidth, optionally via the ubiquitous Ethernet protocol. Furthermore, the FPGAs' internal states and configuration memories have to be accessed for control and monitoring purposes. Another requirement for a modern DAQ network is fault tolerance against intermittent data errors, in the form of automatic retransmission of faulty data. As FPGAs suffer from Single Event Effects when exposed to ionizing particles, the system also has to deal with failing FPGAs. The TrbNet protocol was developed taking all these requirements into account. Three virtual channels are merged on one physical medium: the trigger/epoch information is transported with the highest priority; the data channel is second in the priority order, while the control channel is last. Combined with a small frame size of 80 bit, this guarantees low-latency data transport: a system with 100 front-ends can be built with a one-way latency of 2.2 µs. The TrbNet protocol was implemented in each of the 550 FPGAs of the HADES upgrade project and was successfully used during the Au+Au campaign in April 2012. With 2·10⁶/s Au ions and a 3% interaction ratio, the accepted trigger rate is 10 kHz while data is written to storage at 150 MBytes/s. Errors are reliably mitigated via the implemented retransmission of packets and auto-shut-down of individual links. TrbNet was also used for full monitoring of the FEE status. The network stack is written in VHDL and was successfully deployed on various Lattice and Xilinx devices. The TrbNet is also
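    The three-channel priority merge described in the abstract can be sketched schematically (a Python stand-in for the VHDL arbiter; the channel names follow the record, but the queue mechanics are invented):

```python
import heapq

# Priority order from the record: trigger/epoch (0) > data (1) > slow control (2).
PRIORITY = {"trigger": 0, "data": 1, "control": 2}

class VirtualChannelMux:
    """Merge three logical channels onto one physical link,
    always transmitting the highest-priority pending frame first."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within one channel

    def submit(self, channel, frame):
        heapq.heappush(self._heap, (PRIORITY[channel], self._seq, channel, frame))
        self._seq += 1

    def next_frame(self):
        _, _, channel, frame = heapq.heappop(self._heap)
        return channel, frame

mux = VirtualChannelMux()
mux.submit("control", "read-register")
mux.submit("data", "event-17")
mux.submit("trigger", "trigger-17")
order = [mux.next_frame()[0] for _ in range(3)]
print(order)   # -> ['trigger', 'data', 'control']
```

Strict priority plus the small 80-bit frame size is what bounds the trigger channel's latency: a trigger frame never waits behind more than one in-flight frame of lower priority.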

  8. In-beam experience with a highly granular DAQ and control network: TrbNet

    Science.gov (United States)

    Michel, J.; Korcyl, G.; Maier, L.; Traxler, M.

    2013-02-01

    Virtually all Data Acquisition Systems (DAQ) for nuclear and particle physics experiments use a large number of Field Programmable Gate Arrays (FPGAs) for data transport and for more complex tasks such as pattern recognition and data reduction. All these FPGAs in a large system have to share a common state, like a trigger number or an epoch counter, to keep the system synchronized for consistent event/epoch building. Additionally, the collected data has to be transported with high bandwidth, optionally via the ubiquitous Ethernet protocol. Furthermore, the FPGAs' internal states and configuration memories have to be accessed for control and monitoring purposes. Another requirement for a modern DAQ network is fault tolerance against intermittent data errors, in the form of automatic retransmission of faulty data. As FPGAs suffer from Single Event Effects when exposed to ionizing particles, the system also has to deal with failing FPGAs. The TrbNet protocol was developed taking all these requirements into account. Three virtual channels are merged on one physical medium: the trigger/epoch information is transported with the highest priority; the data channel is second in the priority order, while the control channel is last. Combined with a small frame size of 80 bit, this guarantees low-latency data transport: a system with 100 front-ends can be built with a one-way latency of 2.2 µs. The TrbNet protocol was implemented in each of the 550 FPGAs of the HADES upgrade project and was successfully used during the Au+Au campaign in April 2012. With 2·10⁶/s Au ions and a 3% interaction ratio, the accepted trigger rate is 10 kHz while data is written to storage at 150 MBytes/s. Errors are reliably mitigated via the implemented retransmission of packets and auto-shut-down of individual links. TrbNet was also used for full monitoring of the FEE status. The network stack is written in VHDL and was successfully deployed on various Lattice and Xilinx devices. The TrbNet is also

  9. A high dynamic range data acquisition system for a solid-state electron electric dipole moment experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Jin; Kunkler, Brandon; Liu, Chen-Yu; Visser, Gerard [CEEM, Physics Department, Indiana University, Bloomington, Indiana 47408 (United States)

    2012-01-15

    We have built a high-precision (24-bit) data acquisition (DAQ) system capable of simultaneously sampling eight input channels for the measurement of the electric dipole moment of the electron. The DAQ system consists of two main components: a master board for DAQ control and eight individual analog-to-digital converter (ADC) boards for signal processing. This custom DAQ system provides galvanic isolation of the ADC boards from each other and from the master board, using fiber-optic communication to reduce the possibility of ground-loop pickup and to attain extremely low levels of channel cross-talk. In this paper, we describe the implementation of the DAQ system and scrutinize its performance.

  10. Three Generations of FPGA DAQ Development for the ATLAS Pixel Detector

    CERN Document Server

    AUTHOR|(CDS)2091916; Hsu, Shih-Chieh; Hauck, Scott Alan

    The Large Hadron Collider (LHC) at the European Center for Nuclear Research (CERN) tracks a schedule of long physics runs, followed by periods of inactivity known as Long Shutdowns (LS). During these LS phases both the LHC and the experiments around its ring undergo maintenance and upgrades. For the LHC these upgrades improve its ability to create data for physicists; the more data the LHC can create, the more opportunities there are for rare events to appear that physicists will be interested in. The experiments upgrade so that they can record the data and ensure such events won't be missed. Currently the LHC is in Run 2, having completed the first of three LSs. This thesis focuses on the development of Field-Programmable Gate Array (FPGA)-based readout systems that span three major tasks of the ATLAS Pixel data acquisition (DAQ) system. The evolution of the Pixel DAQ's Readout Driver (ROD) card is presented. Starting from improvements made to the new Insertable B-Layer (IBL) ROD design, which was part of t...

  11. The use of Ethernet in the DataFlow of the ATLAS Trigger & DAQ

    CERN Document Server

    Stancu, Stefan; Dobinson, Bob; Korcyl, Krzysztof; Knezo, Emil; CHEP 2003 Computing in High Energy Physics

    2003-01-01

    The article analyzes a proposed network topology for the ATLAS DAQ DataFlow, and identifies the Ethernet features required for a proper operation of the network: MAC address table size, switch performance in terms of throughput and latency, the use of Flow Control, Virtual LANs and Quality of Service. We investigate these features on some Ethernet switches, and conclude on their usefulness for the ATLAS DataFlow network

  12. An introduction to the LAMPF data acquisition system

    International Nuclear Information System (INIS)

    Fu Saihong

    1993-01-01

    The LAMPF Data Acquisition Systems are divided into the general DAQ system and the advanced MEGA DAQ system. The structure and future plans of the general system are described. The second-stage trigger has been implemented at LAMPF using a commercially available workstation and VME interface. The implementation is described and measurements of data transfer speeds are presented

  13. An Introduction to ATLAS Pixel Detector DAQ and Calibration Software Based on a Year's Work at CERN for the Upgrade from 8 to 13 TeV

    CERN Document Server

    AUTHOR|(CDS)2094561

    An overview is presented of the ATLAS pixel detector Data Acquisition (DAQ) system obtained by the author during a year-long opportunity to work on calibration software for the 2015-16 Layer-2 upgrade. It is hoped the document will function more generally as an easy entry point for future work on ATLAS pixel detector calibration systems. To begin with, the overall place of ATLAS pixel DAQ within the CERN Large Hadron Collider (LHC), the purpose of the Layer-2 upgrade and the fundamentals of pixel calibration are outlined. This is followed by a brief look at the high level structure and key features of the calibration software. The paper concludes by discussing some difficulties encountered in the upgrade project and how these led to unforeseen alternative enhancements, such as development of calibration "simulation" software allowing the soundness of the ongoing upgrade work to be verified while not all of the actual readout hardware was available for the most comprehensive testing.

  14. The design and realization of a general high-speed RAIN100B DAQ module based on the PowerPC MPC5200B processor

    International Nuclear Information System (INIS)

    Xue Tao; Gong Guanghua; Shao Beibei

    2010-01-01

    To handle the DAQ functions of nuclear electronics, the Department of Engineering Physics at Tsinghua University designed and realized a general-purpose, high-speed RAIN100B DAQ module based on Freescale's PowerPC MPC5200B processor. The RAIN100B was used for the DAQ of a GEM detector, where it reached data rates of up to 90 Mbps. The results are also presented and discussed. (authors)

  15. Verification and Diagnostics Framework in ATLAS Trigger/DAQ

    CERN Document Server

    Barczyk, M.; Caprini, M.; Da Silva Conceicao, J.; Dobson, M.; Flammer, J.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Lucio, L.; Mapelli, L.; Soloviev, I.; Hart, R.; Amorim, A.; Klose, D.; Lima, J.; Pedro, J.; Wolters, H.; Badescu, E.; Alexandrov, I.; Kotov, V.; Mineev, M.; Ryabov, Yu.; Ryabov, Yu.

    2003-01-01

    Trigger and data acquisition (TDAQ) systems for modern HEP experiments are composed of thousands of hardware and software components depending on each other in a very complex manner. Typically, such systems are operated by non-expert shift operators, who are not aware of the details of system functionality. It is therefore necessary to help the operator control the system and to minimize system down-time by providing knowledge-based facilities for automatic testing and verification of system components, and also for error diagnostics and recovery. For this purpose, a verification and diagnostic framework was developed in the scope of ATLAS TDAQ. The verification functionality of the framework allows developers to configure simple low-level tests for any component in a TDAQ configuration. A test can be configured as one or more processes running on different hosts. The framework organizes tests in sequences, using knowledge about component hierarchy and dependencies, and allowing the operator to verify the fun...

  16. Performance of the HADES DAQ in Au+Au

    Energy Technology Data Exchange (ETDEWEB)

    Michel, Jan [Goethe Univ. Frankfurt am Main (Germany); Collaboration: HADES-Collaboration

    2013-07-01

    The High Acceptance DiElectron Spectrometer (HADES) is located at the SIS-18 accelerator at the GSI Helmholtz Center for Heavy Ion Research in Darmstadt. In April 2012 a five-week experimental run using a 1.23 AGeV gold beam focused on a 15-fold segmented gold target was conducted. One major reason for this successful data taking was the upgraded data acquisition system. An optical network running a customized network protocol (TrbNet) connects the front-end modules with read-out nodes. Here the data stream is converted to Gigabit Ethernet packets which are subsequently transported to a server farm built from commodity hardware. All electronic components are supervised using a new, web-based monitoring system making use of the inherent slow-control features of TrbNet. In total, the system comprises 550 FPGA-based modules, 30 Gigabit Ethernet links, four multi-core servers and 150 TB of local disk storage. The whole system is able to record event data in heavy-ion collisions at rates of up to 30 kHz and 800 MByte/s. During the experiment, the mean rates were 8 kHz and 150 MByte/s respectively, mainly due to detector constraints. As a result, 7.7 × 10⁹ events with a total volume of 140 TB were recorded throughout the run. In this contribution the set-up, performance figures and the slow-control concept are shown.
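    The quoted rates are mutually consistent, as a quick arithmetic check shows (decimal units assumed):

```python
# Mean event size implied by the HADES figures above.
mean_rate_hz = 8_000          # accepted events per second
mean_bandwidth = 150e6        # bytes per second written to storage
event_size = mean_bandwidth / mean_rate_hz
print(event_size / 1e3)       # -> 18.75 (kB per event)

# Total volume implied by the recorded event count.
total_events = 7.7e9
total_volume_tb = total_events * event_size / 1e12
print(round(total_volume_tb)) # -> 144, close to the quoted 140 TB
```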

  17. Interfacing Detectors to Triggers And DAQ Electronics; TOPICAL

    International Nuclear Information System (INIS)

    Crosetto, Dario B.

    1999-01-01

    The complete design of the front-end electronics interfacing LHCb detectors, Level-0 trigger and higher levels of trigger with flexible configuration parameters has been made for (a) ASIC implementation, and (b) FPGA implementation. The importance of approaching designs in technology-independent form becomes essential with the actual rapid electronics evolution. Being able to constrain the entire design to a few types of replicated components: (a) the fully programmable 3D-Flow system, and (b) the configurable front-end circuit described in this article, provides even further advantages because only one or two types of components will need to migrate to the newer technologies. To base on today's technology the design of a system such as the LHCb project that is to begin working in 2006 is not cost-effective. The effort required to migrate to a higher-performance will, in that case, be almost equivalent to completely redesigning the architecture from scratch. The proposed technology independent design with the current configurable front-end module described in this article and the scalable 3D-Flow fully programmable system described elsewhere, based on the study of the evolution of electronics during the past few years and the forecasted advances in the years to come, aims to provide a technology-independent design which lends itself to any technology at any time. In this case, technology independence is based mainly on generic-HDL reusable code which allows a very rapid realization of the state-of-the-art circuits in terms of gate density, power dissipation, and clock frequency. The design of four trigger towers presently fits into an OR3T30 FPGA. Preliminary test results (provided in this paper) meet the functional requirements of LHCb and provide sufficient flexibility to introduce future changes. The complete system design is also provided along with the integration of the front-end design in the entire system and the cost and dimension of the electronics

  18. LHCb: Dynamically Adaptive Header Generator and Front-End Source Emulator for a 100 Gbps FPGA Based DAQ

    CERN Multimedia

    Srikanth, S

    2014-01-01

    The proposed upgrade for the LHCb experiment envisages a system of 500 data sources, each generating data at 100 Gbps, the acquisition and processing of which is a big challenge even for current state-of-the-art FPGAs. This requires an FPGA DAQ module that not only handles the data generated by the experiment but is also versatile enough to dynamically adapt to potential inadequacies of other components like the network and PCs. Such a module needs to maintain real-time operation while at the same time maintaining system stability and overall data integrity. This also creates a need for a front-end source emulator capable of generating the various data patterns, which acts as a testbed to validate the functionality and performance of the Header Generator. The rest of the abstract briefly describes these modules and their implementation. The Header Generator is used to packetize the streaming data from the detectors before it is sent to the PCs for further processing. This is achieved by continuously scannin...

  19. Rate Predictions and Trigger/DAQ Resource Monitoring in ATLAS

    CERN Document Server

    Schaefer, D M; The ATLAS collaboration

    2012-01-01

    Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever increasing rate. The ATLAS experiment successfully records the collision data with high efficiency and excellent data quality. Events are selected using a three-level trigger system, where each level makes a more refined selection. The level-1 trigger (L1) consists of a custom-designed hardware trigger which seeds two higher software-based trigger levels. Over 300 triggers compose a trigger menu which selects physics signatures such as electrons, muons, particle jets, etc. Each trigger consumes computing resources of the ATLAS trigger system and offline storage. The LHC instantaneous luminosity conditions, the desired physics goals of the collaboration, and the limits of the trigger infrastructure determine the composition of the ATLAS trigger menu. We describe a trigger monitoring framework for computing the costs of individual trigger algorithms, such as data request rates and CPU consumption. This framework has been used...
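    The cost of a trigger is essentially its input rate times its per-event CPU time, plus the storage bandwidth of what it accepts. A toy model (all numbers and the function itself are illustrative, not ATLAS measurements):

```python
def trigger_cost(input_rate_hz, cpu_ms_per_event, accept_fraction, event_kb):
    # CPU cores fully occupied by this algorithm at the given input rate,
    # and the storage bandwidth consumed by the events it accepts.
    cores = input_rate_hz * cpu_ms_per_event / 1000.0
    storage_mb_s = input_rate_hz * accept_fraction * event_kb / 1000.0
    return cores, storage_mb_s

# A hypothetical chain: 20 kHz input, 50 ms per event, 1% accept, 1.5 MB events.
cores, storage = trigger_cost(20_000, 50, 0.01, 1500)
print(cores, storage)   # -> 1000.0 300.0
```

A trigger menu is then a sum of such terms, balanced against the available farm capacity and offline storage.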

  20. DAQ cards for the Compact Muon Solenoid: a successful technology transfer case

    CERN Document Server

    Barone, M; Geralis, T; Mastroyiannopoulos, N; Tzamarias, S; Zachariadou, K; Tsoussis, L

    2002-01-01

    In this paper we describe a project accomplished by a collaboration of researchers, engineers and managers from a medium-size Greek company, Hourdakis Electronics S.A., and the research laboratories CERN in Geneva and DEMOKRITOS in Athens. The project involved the production of 22 input-output DAQ electronic modules to be used for R&D purposes in the Compact Muon Solenoid experiment of the LHC at CERN. This project can be considered a successful case of technology transfer. (3 refs).

  1. Pixel DAQ and trigger for HL-LHC

    International Nuclear Information System (INIS)

    Morettini, P.

    2017-01-01

    The read-out is one of the challenges in the design of a pixel detector for the High Luminosity upgrade of the Large Hadron Collider (HL-LHC), which is expected to operate from 2026 at a leveled luminosity of 5 × 10³⁴ cm⁻² s⁻¹. This is especially true if tracking information is needed in a low-latency trigger system. The difficulties of a fast read-out will be reviewed, and possible strategies explained. The solutions that are being evaluated by the ATLAS and CMS collaborations for the upgrade of their trackers will be outlined and ideas on possible development beyond HL-LHC will be presented.

  2. LHCb: F.E.C. for DAQ networks

    CERN Multimedia

    Floros, G; Neufeld, N

    2014-01-01

    The demand for faster and more reliable networks is growing day by day, both in commercial and scientific applications, driving many innovations in network protocols, fiber optics and network controllers. Operating fast links on relatively inexpensive hardware is an important and challenging aspect of this. One important way to enable this is to provide the network with an established mechanism of error correction, called Forward Error Correction (F.E.C.). Although error-correcting codes have existed for over six decades and F.E.C. is applied in various projects, it is still not widespread in Ethernet networks. F.E.C. introduces a very cost-effective way to expand the limits of any network based on micro-controllers synthesized on FPGAs, but it is provided only for specific applications, such as backplane systems. Most of the FPGA and/or IP core vendors either do not provide this feature in their Ethernet implementations or their F.E.C. implementations are based on Ethernet micro-controllers that have a different struct...
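    As an illustration of the principle (not of any vendor's Ethernet F.E.C.), the classic Hamming(7,4) code lets a receiver correct any single flipped bit without retransmission:

```python
def hamming74_encode(nibble):
    # Encode 4 data bits into a 7-bit codeword (positions 1..7,
    # parity bits at positions 1, 2 and 4).
    d1, d2, d3, d4 = [(nibble >> i) & 1 for i in range(4)]
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(bits):
    # XOR of the positions of set bits is 0 for a valid codeword;
    # otherwise it points at the single corrupted position.
    syndrome = 0
    for pos in range(1, 8):
        if bits[pos - 1]:
            syndrome ^= pos
    if syndrome:
        bits[syndrome - 1] ^= 1          # forward correction, no retransmit
    d1, d2, d3, d4 = bits[2], bits[4], bits[5], bits[6]
    return d1 | (d2 << 1) | (d3 << 2) | (d4 << 3)

word = hamming74_encode(0b1011)
word[4] ^= 1                             # simulate a bit error on the link
print(hamming74_decode(word))            # -> 11 (0b1011 recovered)
```

Production Ethernet F.E.C. uses far stronger codes (e.g. Reed-Solomon), but the receiver-side correction principle is the same.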

  3. DAQ: Software Architecture for Data Acquisition in Sounding Rockets

    Science.gov (United States)

    Ahmad, Mohammad; Tran, Thanh; Nichols, Heidi; Bowles-Martinez, Jessica N.

    2011-01-01

    A multithreaded software application was developed by the Jet Propulsion Lab (JPL) to collect a set of correlated imagery, Inertial Measurement Unit (IMU) and GPS data for a Wallops Flight Facility (WFF) sounding rocket flight. The data set will be used to advance Terrain Relative Navigation (TRN) technology algorithms being researched at JPL. This paper describes the software architecture and the tests used to meet the timing and data-rate requirements of the software used to collect the dataset. Also discussed are the challenges of using commercial off-the-shelf (COTS) flight hardware and open-source software, including multiple Camera Link (C-link) based cameras, a Pentium-M based computer, and the Linux Fedora 11 operating system. Additionally, the paper covers the history of the software architecture's usage in other JPL projects and its applicability to future missions, such as cubesats, UAVs, and research planes/balloons, as well as the human aspect of the project, especially JPL's Phaeton program, and the results of the launch.

  4. CMS DAQ Event Builder Based on Gigabit Ethernet

    CERN Document Server

    Bauer, G; Branson, J; Brett, A; Cano, E; Carboni, A; Ciganek, M; Cittolin, S; Erhan, S; Gigi, D; Glege, F; Gómez-Reino, Robert; Gulmini, M; Gutiérrez-Mlot, E; Gutleber, J; Jacobs, C; Kim, J C; Klute, M; Lipeles, E; Lopez-Perez, Juan Antonio; Maron, G; Meijers, F; Meschi, E; Moser, R; Murray, S; Oh, A; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Pollet, L; Rácz, A; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Sumorok, K; Suzuki, I; Tsirigkas, D; Varela, J

    2007-01-01

    The CMS Data Acquisition System is designed to build and filter events originating from 476 detector data sources at a maximum trigger rate of 100 kHz. Different architectures and switch technologies have been evaluated to accomplish this purpose. Events will be built in two stages: the first stage will be a set of event builders called FED Builders. These will be based on Myrinet technology and will pre-assemble groups of about 8 data sources. The second stage will be a set of event builders called Readout Builders. These will perform the building of full events. A single Readout Builder will build events from 72 sources of 16 kB fragments at a rate of 12.5 kHz. In this paper we present the design of a Readout Builder based on TCP/IP over Gigabit Ethernet and the optimization that was required to achieve the design throughput. This optimization includes the architecture of the Readout Builder, the setup of TCP/IP, and the hardware selection.
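    The quoted Readout Builder figures fix its aggregate throughput (decimal kB assumed):

```python
# Throughput implied by the figures above: 72 sources x 16 kB at 12.5 kHz.
sources = 72
fragment_bytes = 16e3
rate_hz = 12.5e3

event_bytes = sources * fragment_bytes       # size of one fully built event
throughput = event_bytes * rate_hz           # bytes/s through one Readout Builder
print(event_bytes / 1e6, throughput / 1e9)   # -> 1.152 14.4 (MB, GB/s)
```

At roughly 14.4 GB/s aggregate, the traffic must be spread over many Gigabit Ethernet links in parallel, which is why the builder architecture and the TCP/IP tuning matter.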

  5. ZEXP - expert system for ZEUS

    International Nuclear Information System (INIS)

    Behrens, U.; Flasinski, M.; Hagge, L.

    1992-10-01

    Proper and timely reactions to errors occurring in the online data-acquisition (DAQ) system are a necessary condition for smooth data taking during experiment runs. Since the Eventbuilder (EVB) is a central part of the ZEUS DAQ system, it is the best place for monitoring, detecting, and recognizing erroneous behaviour. ZEXP is a software tool for upgrading the DAQ system performance. The pattern-recognition methodology used for designing one of its two main modules is discussed. The general design ideas of the system and some preliminary results from the summarizing run module are presented as well. (orig.)

  6. Controlling and Monitoring the Data Flow of the LHCb Read-out and DAQ Network

    CERN Multimedia

    Schwemmer, R; Neufeld, N; Svantesson, D

    2011-01-01

    The LHCb readout uses a set of 320 FPGA-based boards as the interface between the on-detector hardware and the GBE DAQ network. The boards are the logical Level 1 (L1) read-out electronics and aggregate the experiment's raw data into event fragments that are sent to the DAQ network. To control the many parameters of the read-out boards, an embedded PC is included on each board, connecting to the board's ICs and FPGAs. The data from the L1 boards is sent through an aggregation network into the High Level Trigger farm. The farm comprises approximately 1500 PCs which first assemble the fragments from the L1 boards and then do a partial reconstruction and selection of the events. In total there are approximately 3500 network connections. Data is pushed through the network and there is no mechanism for resending packets. Loss of data on a small scale is acceptable, but care has to be taken to avoid data loss where possible. To monitor and debug losses, different probes are inserted throughout the entire read-out chain t...

  7. Controlling and monitoring the data flow of the LHCb read-out and DAQ network

    International Nuclear Information System (INIS)

    Schwemmer, R.; Gaspar, C.; Neufeld, N.; Svantesson, D.

    2012-01-01

    The LHCb read-out uses a set of 320 FPGA-based boards as the interface between the on-detector hardware and the GBE DAQ network. The boards are the logical Level 1 (L1) read-out electronics and aggregate the experiment's raw data into event fragments that are sent to the DAQ network. To control the many parameters of the read-out boards, an embedded PC is included on each board, connecting to the board's ICs and FPGAs. The data from the L1 boards is sent through an aggregation network into the High Level Trigger farm. The farm comprises approximately 1500 PCs which first assemble the fragments from the L1 boards and then do a partial reconstruction and selection of the events. In total there are approximately 3500 network connections. Data is pushed through the network and there is no mechanism for resending packets. Loss of data on a small scale is acceptable, but care has to be taken to avoid data loss where possible. To monitor and debug losses, different probes are inserted throughout the entire read-out chain to count fragments, packets and their rates at different positions. To keep uniformity throughout the experiment, all control software was developed using the common SCADA software, PVSS, with the JCOP framework as its base. The presentation will focus on the low-level controls interface developed for the L1 boards and the networking probes, as well as the integration of the high-level user interfaces into PVSS. (authors)

  8. Controlling and Monitoring the Data Flow of the LHCb Read-out and DAQ Network

    CERN Document Server

    Schwemmer, Rainer; Neufeld, N; Svantesson, D

    2011-01-01

    The LHCb read-out uses a set of 320 FPGA-based boards as the interface between the on-detector hardware and the GBE DAQ network. The boards are the logical Level 1 (L1) read-out electronics and aggregate the experiment's raw data into event fragments that are sent to the DAQ network. To control the many parameters of the read-out boards, an embedded PC is included on each board, connecting to the board's ICs and FPGAs. The data from the L1 boards is sent through an aggregation network into the High Level Trigger farm. The farm comprises approximately 1500 PCs which first assemble the fragments from the L1 boards and then do a partial reconstruction and selection of the events. In total there are approximately 3500 network connections. Data is pushed through the network and there is no mechanism for resending packets. Loss of data on a small scale is acceptable, but care has to be taken to avoid data loss where possible. To monitor and debug losses, different probes are inserted throughout the entire read-out cha...

  9. DEAP-3600 Data Acquisition System

    Science.gov (United States)

    Lindner, Thomas

    2015-12-01

    DEAP-3600 is a dark matter experiment using liquid argon to detect Weakly Interacting Massive Particles (WIMPs). The DEAP-3600 Data Acquisition (DAQ) has been built using a combination of commercial and custom electronics, organized using the MIDAS framework. The DAQ system needs to suppress a high rate of background events from ³⁹Ar beta decays. This suppression is implemented using a combination of online firmware and software-based event filtering. We will report on progress commissioning the DAQ system, as well as the development of the web-based user interface.
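    Pulse-shape discrimination is the usual software handle on such backgrounds in liquid argon: nuclear-recoil pulses are fast, 39Ar beta pulses slow. A hypothetical filter (the window length, threshold and toy waveforms are invented, not DEAP-3600 parameters):

```python
def fprompt(waveform, prompt_window=60):
    # Fraction of the total pulse charge arriving in the prompt window.
    total = sum(waveform)
    if total == 0:
        return 0.0
    return sum(waveform[:prompt_window]) / total

def passes_filter(waveform, threshold=0.6):
    # Keep recoil-like (fast) pulses, reject beta-like (slow) ones.
    return fprompt(waveform) > threshold

# Toy pulses: a fast, recoil-like pulse and a slow, beta-like pulse.
fast_pulse = [100] * 50 + [1] * 200
slow_pulse = [10] * 50 + [8] * 200
print(passes_filter(fast_pulse), passes_filter(slow_pulse))   # -> True False
```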

  10. Part 2 of the summary for the electronics, DAQ, and computing working group: Technological developments

    International Nuclear Information System (INIS)

    Slaughter, A.J.

    1993-01-01

    The attraction of hadron machines as B factories is the copious production of B particles. However, the interesting physics lies in specific rare final states; the challenge is selecting and recording the interesting ones. Part 1 of the summary for this working group, 'Comparison of Trigger and Data Acquisition Parameters for Future B Physics Experiments', summarizes and compares the different proposals. In parallel with this activity, the working group also looked at a number of the technological developments being proposed to meet the trigger and DAQ requirements. The presentations covered a wide variety of topics, which are grouped into three categories: (1) front-end electronics, (2) level-0 fast triggers, and (3) trigger and vertex processors. The group did not discuss on-line farms or offline data storage and computing due to lack of time

  11. Development of an X-ray imaging system with SOI pixel detectors

    Energy Technology Data Exchange (ETDEWEB)

    Nishimura, Ryutaro, E-mail: ryunishi@post.kek.jp [School of High Energy Accelerator Science, SOKENDAI (The Graduate University for Advanced Studies), Oho 1-1, Tsukuba, Ibaraki 305-0801 (Japan); Arai, Yasuo; Miyoshi, Toshinobu [Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK-IPNS), Oho 1-1, Tsukuba, Ibaraki 305-0801 (Japan); Hirano, Keiichi; Kishimoto, Shunji; Hashimoto, Ryo [Institute of Materials Structure Science, High Energy Accelerator Research Organization (KEK-IMSS), Oho 1-1, Tsukuba, Ibaraki 305-0801 (Japan)

    2016-09-21

    An X-ray imaging system employing pixel sensors in silicon-on-insulator (SOI) technology is currently under development. The system consists of an SOI pixel detector (INTPIX4) and a DAQ system based on a multi-purpose readout board (SEABAS2). To remove a bottleneck in the total throughput of the first prototype's DAQ, parallel processing of the data-taking and storing processes and a FIFO buffer were implemented in the new DAQ release. Owing to these upgrades, the DAQ throughput was improved from 6 Hz (41 Mbps) to 90 Hz (613 Mbps). The first X-ray imaging system with the new DAQ software release was tested using 33.3 keV and 9.5 keV mono-energetic X-rays for three-dimensional computerized tomography. The results of these tests are presented. - Highlights: • The X-ray imaging system employing the SOI pixel sensor is currently under development. • The DAQ of the first prototype had a bottleneck in the total throughput. • The new DAQ release solves the bottleneck with parallel processing and a FIFO buffer. • The new DAQ release was tested using 33.3 keV and 9.5 keV mono-energetic X-rays.
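
    The throughput fix described above, decoupling data taking from storing through a FIFO, is the classic producer/consumer pattern. A minimal sketch (thread and queue names are illustrative, not from the SEABAS2 software):

```python
# Producer/consumer decoupling via a bounded FIFO: the data-taking
# thread never blocks on disk latency, and the storing thread drains
# the buffer at its own pace.
import queue
import threading

fifo = queue.Queue(maxsize=1024)   # bounded FIFO buffer between the stages
stored = []

def taker(n_events):
    for i in range(n_events):
        fifo.put(("frame", i))     # acquisition side: push frames as they arrive
    fifo.put(None)                 # end-of-run marker

def storer():
    while True:
        item = fifo.get()
        if item is None:
            break
        stored.append(item)        # storage side (stand-in for a disk write)

t1 = threading.Thread(target=taker, args=(100,))
t2 = threading.Thread(target=storer)
t1.start(); t2.start()
t1.join(); t2.join()
assert len(stored) == 100
```

    The bounded queue also applies back-pressure: if storage falls too far behind, the acquisition side blocks instead of exhausting memory.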

  12. Deployment and future prospects of high performance diagnostics featuring serial I/O (SIO) data acquisition (DAQ) at ASDEX Upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Behler, K., E-mail: karl.behler@ipp.mpg.de [Max-Planck-Institut fuer Plasmaphysik, Boltzmannstr. 2, D-85748 Garching bei Muenchen (Germany); Blank, H.; Eixenberger, H. [Max-Planck-Institut fuer Plasmaphysik, Boltzmannstr. 2, D-85748 Garching bei Muenchen (Germany); Fitzek, M. [Unlimited Computer Systems GmbH, Seeshaupterstr. 15, D-82393 Iffeldorf (Germany); Lohs, A. [Max-Planck-Institut fuer Plasmaphysik, Boltzmannstr. 2, D-85748 Garching bei Muenchen (Germany); Lueddecke, K. [Unlimited Computer Systems GmbH, Seeshaupterstr. 15, D-82393 Iffeldorf (Germany); Merkel, R. [Max-Planck-Institut fuer Plasmaphysik, Boltzmannstr. 2, D-85748 Garching bei Muenchen (Germany)

    2012-12-15

    Highlights: • High sustained data rates transferring measured data from the periphery into the memory of host computers. • Low latency in real-time interrupt handling under Solaris 10. • A new prototype of an even more powerful 2nd-generation SIO II device. • Fusion of all blocks of board logic (serializer, FIFO, TDC, merge engine, PCIe controller) into one single FPGA, simplifying the board's physical layout significantly. - Abstract: The SIO DAQ concept used at the ASDEX Upgrade fusion experiment features data acquisition from a modular front-end (a modular crate-and-interface-cards concept for analog and digital input and output) over standardized serial lines and via a serial input/output computer interface card (the SIO card) in real time, directly into the main memory of a host computer. Deployment of a series of diagnostics using SIO led to various solutions and configurations for the different requirements. Experience has been gained and lessons learned applying the SIO concept at its technical limits. Requirements for a further development of the SIO concept have been identified, and a performance improvement by a factor of 4-8 beyond its current limits seems achievable. An effort has been started to develop a SIO version 2 (SIO II) featuring upgraded serial links and a more powerful FPGA for merging and forwarding data streams to host computer memory. (Compatibility with the existing SIO (SIO I) front-end system has to be maintained.) This paper presents results achieved and experiences gained in the deployment of SIO I, the status of SIO II development (currently in the prototype phase), and projected enhancements and updates to existing implementations.

  13. The Linux based distributed data acquisition system for the ISTRA+ experiment

    International Nuclear Information System (INIS)

    Filin, A.; Inyakin, A.; Novikov, V.; Obraztsov, V.; Smirnov, N.; Vlassov, E.; Yuschenko, O.

    2001-01-01

    The DAQ hardware of the ISTRA+ experiment consists of the VME system crate that contains two PCI-VME bridges interfacing two PCs with VME, an external interrupts receiver, the readout controller for dedicated front-end electronics, the readout controller buffer memory module, the VME-CAMAC interface, and additional control modules. The DAQ computing consists of 6 PCs running the Linux operating system and linked into a LAN. The first PC serves the external interrupts and acquires the data from the front-end electronics. The second one is the slow control computer. The remaining PCs host the monitoring and data analysis software. The Linux based DAQ software provides the external interrupts processing, the data acquisition, recording, and distribution between monitoring and data analysis tasks running at the DAQ PCs. The monitoring programs are based on two packages for data visualization: a home-written one and the ROOT system. MySQL is used as the DAQ database

  14. Measuring Tools Design of Control Rods Drop Time at the RSG-GAS Based on Labview V8.5 and DAQ6009

    International Nuclear Information System (INIS)

    Heri Suherkiman; Sukino; Ranji Gusman

    2012-01-01

    The RSG-GAS reactor has 8 control rods that serve to control the rate of fission. Control rods are the most important technical safety systems and the last protective equipment to shut down the reactor in the event of an abnormal incident. Testing of the control rod drop time is one way to ensure that the control rods can function in accordance with the requirements of reactor operations. The existing test tool has the limitation that it can only measure one control rod per measurement. Another problem is the difficulty of finding a replacement device with the same functionality on the market if the existing tool is damaged. We therefore designed a control rod drop time measuring tool based on LabVIEW v8.5 and DAQ6009. The work has produced a design, component specifications, and programming that are expected to be applied to the manufacture of new control rod drop time measuring devices that have the same functionality as the previous tool with better facilities. (author)
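
    The core measurement is an interval between two threshold crossings: the release trigger edge and the rod-seated edge, sampled per channel by the DAQ. A sketch of that extraction, with all signal names, thresholds, and sample rates purely illustrative:

```python
# Hypothetical drop-time extraction from multi-channel DAQ traces.
# Measuring several rod channels against one shared trigger is what
# lets all rods be tested in a single drop.
def first_edge(samples, threshold, rate_hz):
    """Return the time (s) of the first sample crossing the threshold."""
    for i, v in enumerate(samples):
        if v >= threshold:
            return i / rate_hz
    return None

def drop_times(trigger, rods, threshold=2.5, rate_hz=1000.0):
    """rods: {name: samples}; returns drop time per rod relative to trigger."""
    t0 = first_edge(trigger, threshold, rate_hz)
    return {name: first_edge(ch, threshold, rate_hz) - t0
            for name, ch in rods.items()}

# 1 kHz sampling: trigger fires at 10 ms, rod seats at 410 ms => 0.400 s drop.
trigger = [0.0] * 10 + [5.0] * 490
rod_a = [0.0] * 410 + [5.0] * 90
times = drop_times(trigger, {"rod_A": rod_a})
assert abs(times["rod_A"] - 0.400) < 1e-9
```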

  15. Novel Ethernet Based Optical Local Area Networks for Computer Interconnection

    NARCIS (Netherlands)

    Radovanovic, Igor; van Etten, Wim; Taniman, R.O.; Kleinkiskamp, Ronny

    2003-01-01

    In this paper we present new optical local area networks for fiber-to-the-desk applications. The presented networks are expected to bring a solution for having optical fibers all the way to computers. To bring the overall implementation costs down we have based our networks on short-wavelength optical

  16. Exploiting spatial parallelism in Ethernet-based cluster interconnects

    DEFF Research Database (Denmark)

    Passas, Stavros; Kotsis, George; Karlsson, Sven

    2008-01-01

    In this work we examine the implications of building a single logical link out of multiple physical links. We use MultiEdge to examine the throughput-CPU utilization tradeoffs and examine how overheads and performance scale with the number and speed of links. We use low-level instrumentation...... the multiple link configurations, reaching 80% of nominal throughput, (c) The impact of copying on CPU overhead is significant, and removing copying results in up-to 66% improvement in maximum throughput, reaching almost 100% of the nominal throughput, (d) Scheduling packets over heterogeneous links requires...

  17. Using VME to leverage legacy CAMAC electronics into a high speed data acquisition system

    International Nuclear Information System (INIS)

    Anthony, P.L.

    1997-06-01

    The authors report on the first full scale implementation of a VME based Data Acquisition (DAQ) system at the Stanford Linear Accelerator Center (SLAC). This system was designed for use in the End Station A (ESA) fixed target program. It was designed to handle interrupts at rates up to 120 Hz and event sizes up to 10,000 bytes per interrupt. One of the driving considerations behind the design of this system was to make use of existing CAMAC based electronics and yet deliver a high performance DAQ system. This was achieved by basing the DAQ system in a VME backplane allowing parallel control and readout of CAMAC branches and VME DAQ modules. This system was successfully used in the Spin Physics research program at SLAC (E154 and E155)

  18. Design and performance of an acquisition and control system for a positron camera with novel detectors

    International Nuclear Information System (INIS)

    Symonds-Tayler, J.R.N.; Reader, A.J.; Flower, M.A.

    1996-01-01

    A Sun-based data acquisition and control (DAQ) system has been designed for PETRRA, a whole-body positron camera using large-area BaF2-TMAE detectors. The DAQ system uses a high-speed digital I/O card (S16D) installed on the S-bus of a SPARC10 and a specially-designed Positron Camera Interface (PCI), which also controls both the gantry and horizontal couch motion. Data in the form of different types of 6-byte packets are acquired in list mode. Tests with a signal generator show that the DAQ system should be able to cater for coincidence count-rates up to 100 kcps. The predicted count loss due to the DAQ system is ∼13% at this count rate, provided asynchronous-read based software is used. The list-mode data acquisition system designed for PETRRA could be adapted for other 3D PET cameras with similar data rates
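
    Decoding fixed-size list-mode packets is a simple framing exercise; the sketch below assumes a hypothetical 6-byte layout (one type byte, one pad byte, a 4-byte payload), which is not PETRRA's documented format:

```python
# Illustrative decoder for a fixed-size 6-byte list-mode packet stream.
# Layout assumption: big-endian type byte, pad byte, 32-bit payload.
import struct

def decode(stream: bytes):
    """Split a byte stream into (type, payload) tuples, 6 bytes each."""
    events = []
    for off in range(0, len(stream) // 6 * 6, 6):
        ptype, payload = struct.unpack_from(">BxI", stream, off)
        events.append((ptype, payload))
    return events

raw = struct.pack(">BxI", 1, 0xAABBCCDD) + struct.pack(">BxI", 2, 42)
assert decode(raw) == [(1, 0xAABBCCDD), (2, 42)]
```

    Keeping the packet size fixed is what allows the asynchronous reader to split its buffer without parsing: any 6-byte-aligned offset is a packet boundary.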

  19. The simulation of a data acquisition system for a proposed high resolution PET scanner

    Energy Technology Data Exchange (ETDEWEB)

    Rotolo, C.; Larwill, M.; Chappa, S. [Fermi National Accelerator Lab., Batavia, IL (United States); Ordonez, C. [Chicago Univ., IL (United States)

    1993-10-01

    The simulation of a specific data acquisition (DAQ) system architecture for a proposed high resolution Positron Emission Tomography (PET) scanner is discussed. Stochastic processes are used extensively to model PET scanner signal timing and probable DAQ circuit limitations. Certain architectural parameters, along with stochastic parameters, are varied to quantitatively study the resulting output under various conditions. The inclusion of the DAQ in the model represents a novel method of more complete simulations of tomograph designs, and could prove to be of pivotal importance in the optimization of such designs.

  20. The simulation of a data acquisition system for a proposed high resolution PET scanner

    International Nuclear Information System (INIS)

    Rotolo, C.; Larwill, M.; Chappa, S.; Ordonez, C.

    1993-10-01

    The simulation of a specific data acquisition (DAQ) system architecture for a proposed high resolution Positron Emission Tomography (PET) scanner is discussed. Stochastic processes are used extensively to model PET scanner signal timing and probable DAQ circuit limitations. Certain architectural parameters, along with stochastic parameters, are varied to quantitatively study the resulting output under various conditions. The inclusion of the DAQ in the model represents a novel method of more complete simulations of tomograph designs, and could prove to be of pivotal importance in the optimization of such designs

  1. Design of a large remote seismic exploration data acquisition system, with the architecture of a distributed storage area network

    International Nuclear Information System (INIS)

    Cao, Ping; Song, Ke-zhu; Yang, Jun-feng; Ruan, Fu-ming

    2011-01-01

    Nowadays, seismic exploration data acquisition (DAQ) systems have been developed into remote forms with a large-scale coverage area. In this kind of application, some features must be mentioned. Firstly, there are many sensors which are placed remotely. Secondly, the total data throughput is high. Thirdly, optical fibres are not suitable everywhere because of cost control, harsh running environments, etc. Fourthly, the ability to expand and upgrade is a must for this kind of application. It is a challenge to design this kind of remote DAQ (rDAQ). Data transmission, clock synchronization, data storage, etc. must be considered carefully. A four-level hierarchical model of the rDAQ is proposed, in which the rDAQ is divided into four different function levels. From this model, a simple and clear architecture based on a distributed storage area network is proposed. rDAQs with this architecture have the advantages of flexible configuration, expansibility and stability. This architecture can be applied to design and realize anything from simple single-cable systems to large-scale exploration DAQs

  2. Zero Suppression with Scalable Readout System (SRS) and APV25 FE Chip

    CERN Document Server

    Goentoro, Steven Lukas

    2015-01-01

    Zero suppression is a very useful algorithm in data acquisition and transfer. In this report, I would like to present the basic procedure for applying zero suppression in the ordinary DAQ system that we have (DATE and AMORE)
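
    The basic idea of zero suppression is easy to sketch: transmit only samples above a pedestal threshold, together with their positions, so the sparse waveform can be reconstructed downstream. The threshold and encoding below are illustrative, not the SRS/APV25 format:

```python
# Minimal zero-suppression pass: keep (index, value) pairs above a
# threshold; everything else is implicitly zero on reconstruction.
def zero_suppress(samples, threshold=3):
    return [(i, v) for i, v in enumerate(samples) if v > threshold]

def expand(pairs, length):
    """Reconstruct the full waveform from the suppressed representation."""
    out = [0] * length
    for i, v in pairs:
        out[i] = v
    return out

wave = [0, 1, 0, 7, 9, 2, 0, 0, 5, 0]
kept = zero_suppress(wave)
assert kept == [(3, 7), (4, 9), (8, 5)]       # 10 samples shrink to 3 pairs
assert expand(kept, len(wave)) == [0, 0, 0, 7, 9, 0, 0, 0, 5, 0]
```

    Note the scheme is lossy below threshold (the small samples at indices 1 and 5 are gone), which is the bandwidth trade-off zero suppression makes.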

  3. The design of virtual double-parameter nuclear spectrum acquisition system based on LabVIEW

    International Nuclear Information System (INIS)

    Liu Songqiu; Chen Chuan; Lei Wuhu

    2001-01-01

    This paper introduces the design of a virtual double-parameter nuclear spectrum acquisition system based on LabVIEW and an NI multifunction DAQ board, and its use to measure double-parameter nuclear spectra
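
    A double-parameter spectrum is an event-by-event histogram over two measured quantities (e.g. two ADC channels per event). A minimal software accumulator illustrating the idea, with hypothetical channel ranges:

```python
# Accumulate (x, y) channel pairs into a 2D spectrum, the core data
# structure behind a double-parameter nuclear spectrum display.
def accumulate_2d(events, nx, ny):
    """events: iterable of (x_channel, y_channel); returns nx-by-ny counts."""
    spectrum = [[0] * ny for _ in range(nx)]
    for x, y in events:
        if 0 <= x < nx and 0 <= y < ny:   # drop out-of-range channels
            spectrum[x][y] += 1
    return spectrum

hist = accumulate_2d([(1, 2), (1, 2), (3, 0)], nx=4, ny=4)
assert hist[1][2] == 2 and hist[3][0] == 1
```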

  4. Evaluation of a Modular PET System Architecture with Synchronization over Data Links

    OpenAIRE

    Aliaga Varea, Ramón José; Herrero Bosch, Vicente; Monzó Ferrer, José María; Ros García, Ana; Gadea Gironés, Rafael; Colom Palero, Ricardo José

    2014-01-01

    A DAQ architecture for a PET system is presented that focuses on modularity, scalability and reusability. The system defines two basic building blocks: data acquisitors and concentrators, which can be replicated in order to build a complete DAQ of variable size. Acquisition modules contain a scintillating crystal and either a position-sensitive photomultiplier (PSPMT) or an array of silicon photomultipliers (SiPM). The detector signals are processed by AMIC, an integrated analog front-end t...

  5. The team from ALICE DAQ (Data acquisition) involved in the 7th ALICE data challenge. First row: Sylvain Chapeland, Ulrich Fuchs, Pierre Vande Vyvre, Franco Carena Second row: Wisla Carena, Irina MAKHLYUEVA , Roberto Divia

    CERN Multimedia

    Claudia Marcelloni

    2007-01-01

    The team from ALICE DAQ (Data acquisition) involved in the 7th ALICE data challenge. First row: Sylvain Chapeland, Ulrich Fuchs, Pierre Vande Vyvre, Franco Carena Second row: Wisla Carena, Irina MAKHLYUEVA , Roberto Divia

  6. Modelling of data acquisition systems

    International Nuclear Information System (INIS)

    Buono, S.; Gaponenko, I.; Jones, R.; Mapelli, L.; Mornacchi, G.; Prigent, D.; Sanchez-Corral, E.; Spiwoks, R.; Skiadelli, M.; Ambrosini, G.

    1994-01-01

    The RD13 project was approved in April 1991 for the development of a scalable data taking system suitable to host various LHC studies. One of its goals is to use simulations as a tool for understanding, evaluating, and constructing different configurations of such data acquisition (DAQ) systems. The RD13 project has developed a modelling framework for this purpose. It is based on MODSIM II, an object-oriented, discrete-event simulation language. A library of DAQ components makes it possible to describe a variety of DAQ architectures and different hardware options in a modular and scalable way. A graphical user interface (GUI) allows easy configuration, initialization and on-line monitoring of the simulation program. A tracing facility provides flexible off-line analysis of a trace file written at run-time

  7. DAQ Software Contributions, Absolute Scale Energy Calibration and Background Evaluation for the NOvA Experiment at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Flumerfelt, Eric Lewis [Univ. of Tennessee, Knoxville, TN (United States)

    2015-08-01

    The NOvA (NuMI Off-axis νe Appearance) Experiment is a long-baseline accelerator neutrino experiment currently in its second year of operations. NOvA uses the Neutrinos from the Main Injector (NuMI) beam at Fermilab, and there are two main off-axis detectors: a Near Detector at Fermilab and a Far Detector 810 km away at Ash River, MN. The work reported herein is in support of the NOvA Experiment, through contributions to the development of data acquisition software, providing an accurate, absolute-scale energy calibration for electromagnetic showers in NOvA detector elements, crucial to the primary electron neutrino search, and through an initial evaluation of the cosmic background rate in the NOvA Far Detector, which is situated on the surface without significant overburden. Additional support work for the NOvA Experiment is also detailed, including DAQ Server Administration duties and a study of NOvA’s sensitivity to neutrino oscillations into a “sterile” state.

  8. The psychometric characteristics of the revised depression attitude questionnaire (R-DAQ) in Pakistani medical practitioners: a cross-sectional study of doctors in Lahore.

    Science.gov (United States)

    Haddad, Mark; Waqas, Ahmed; Sukhera, Ahmed Bashir; Tarar, Asad Zaman

    2017-07-27

    Depression is a common mental health problem and a leading contributor to the global burden of disease. The attitudes and beliefs of the public and of health professionals influence social acceptance and affect the esteem and help-seeking of people experiencing mental health problems. The attitudes of clinicians are particularly relevant to their role in accurately recognising and providing appropriate support and management of depression. This study examines the characteristics of the revised depression attitude questionnaire (R-DAQ) with doctors working in healthcare settings in Lahore, Pakistan. A cross-sectional survey was conducted in 2015 using the R-DAQ. A convenience sample of 700 medical practitioners based in six hospitals in Lahore was approached to participate in the survey. The R-DAQ structure was examined using Parallel Analysis from polychoric correlations. Unweighted least squares analysis (ULSA) was used for factor extraction. Model fit was estimated using goodness-of-fit indices and the root mean square of standardized residuals (RMSR), and internal consistency reliability for the overall scale and subscales was assessed using reliability estimates based on Mislevy and Bock (BILOG 3 Item analysis and test scoring with binary logistic models. Mooresville: Scientific Software, 55) and the McDonald's Omega statistic. Findings using this approach were compared with principal axis factor analysis based on a Pearson correlation matrix. 601 (86%) of the doctors approached consented to participate in the study. Exploratory factor analysis of R-DAQ scale responses demonstrated the same 3-factor structure as in the UK development study, though analyses indicated removal of 7 of the 22 items because of weak loading or poor model fit. The 3-factor solution accounted for 49.8% of the common variance. Scale reliability and internal consistency were adequate: total scale standardised alpha was 0.694; subscale reliability for

  9. The ngdp framework for data acquisition systems

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2010-01-01

    The ngdp framework is intended to provide a base for the data acquisition (DAQ) system software. The ngdp's design key features are: high modularity and scalability; usage of the kernel context (particularly kernel threads) of the operating systems (OS), which allows one to avoid preemptive scheduling and unnecessary memory-to-memory copying between contexts; elimination of intermediate data storages on the media slower than the operating memory like hard disks, etc. The ngdp, having the above properties, is suitable to organize and manage data transportation and processing for needs of essentially distributed DAQ systems

  10. Data acquisition system issues for large experiments

    International Nuclear Information System (INIS)

    Siskind, E.J.

    2007-01-01

    This talk consists of personal observations on two classes of data acquisition ('DAQ') systems for Silicon trackers in large experiments with which the author has been concerned over the last three or more years. The first half is a classic 'lessons learned' recital based on experience with the high-level debug and configuration of the DAQ system for the GLAST LAT detector. The second half is concerned with a discussion of the promises and pitfalls of using modern (and future) generations of 'system-on-a-chip' ('SOC') or 'platform' field-programmable gate arrays ('FPGAs') in future large DAQ systems. The DAQ system pipeline for the 864k channels of Si tracker in the GLAST LAT consists of five tiers of hardware buffers which ultimately feed into the main memory of the (two-active-node) level-3 trigger processor farm. The data formats and buffer volumes of these tiers are briefly described, as well as the flow control employed between successive tiers. Lessons learned regarding data formats, buffer volumes, and flow control/data discard policy are discussed. The continued development of platform FPGAs containing large amounts of configurable logic fabric, embedded PowerPC hard processor cores, digital signal processing components, large volumes of on-chip buffer memory, and multi-gigabit serial I/O capability permits DAQ system designers to vastly increase the amount of data preprocessing that can be performed in parallel within the DAQ pipeline for detector systems in large experiments. The capabilities of some currently available FPGA families are reviewed, along with the prospects for next-generation families of announced, but not yet available, platform FPGAs. Some experience with an actual implementation is presented, and reconciliation between advertised and achievable specifications is attempted. The prospects for applying these components to space-borne Si tracker detectors are briefly discussed

  11. A Compton suppressed detector multiplicity trigger based digital DAQ for gamma-ray spectroscopy

    Science.gov (United States)

    Das, S.; Samanta, S.; Banik, R.; Bhattacharjee, R.; Basu, K.; Raut, R.; Ghugre, S. S.; Sinha, A. K.; Bhattacharya, S.; Imran, S.; Mukherjee, G.; Bhattacharyya, S.; Goswami, A.; Palit, R.; Tan, H.

    2018-06-01

    The development of a digitizer based pulse processing and data acquisition system for γ-ray spectroscopy with large detector arrays is presented. The system is based on 250 MHz 12-bit digitizers, and is triggered by a user chosen multiplicity of Compton suppressed detectors. The logic for trigger generation is similar to the one practised for analog (NIM/CAMAC) pulse processing electronics, while retaining the fast processing merits of the digitizer system. Codes for reduction of data acquired from the system have also been developed. The system has been tested with offline studies using radioactive sources as well as in the in-beam experiments with an array of Compton suppressed Clover detectors. The results obtained therefrom validate its use in spectroscopic efforts for nuclear structure investigations.
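
    The trigger condition described above, at least M Compton-suppressed detectors firing within a coincidence window, can be sketched in software; the multiplicity M and window width below are illustrative parameters, not the values used with the Clover array:

```python
# Software model of a multiplicity trigger: fire when any sliding
# coincidence window contains at least m suppressed-detector hits.
def multiplicity_trigger(fired_times, m=2, window=100):
    """fired_times: hit timestamps (ns); True if >= m hits coincide."""
    ts = sorted(fired_times)
    for i in range(len(ts)):
        j = i
        while j < len(ts) and ts[j] - ts[i] <= window:
            j += 1
        if j - i >= m:
            return True
    return False

assert multiplicity_trigger([10, 50, 900], m=2)      # 10 ns and 50 ns coincide
assert not multiplicity_trigger([10, 900], m=2)      # isolated hits: no trigger
```

    In the digitizer this same logic runs in FPGA firmware on discriminator outputs, mirroring the coincidence units of the older NIM/CAMAC chains.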

  12. DaqProVis, a toolkit for acquisition, interactive analysis, processing and visualization of multidimensional data

    Energy Technology Data Exchange (ETDEWEB)

    Morhac, M. [Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia)]. E-mail: fyzimiro@savba.sk; Matousek, V. [Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia); Turzo, I. [Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia); Kliman, J. [Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia)

    2006-04-01

    A multidimensional data acquisition, processing and visualization system to analyze experimental data in nuclear physics is described. It includes a large number of sophisticated algorithms for multidimensional spectrum processing, including background elimination, deconvolution, and peak searching and fitting.

  13. A high-speed DAQ framework for future high-level trigger and event building clusters

    International Nuclear Information System (INIS)

    Caselle, M.; Perez, L.E. Ardila; Balzer, M.; Dritschler, T.; Kopmann, A.; Mohr, H.; Rota, L.; Vogelgesang, M.; Weber, M.

    2017-01-01

    Modern data acquisition and trigger systems require a throughput of several GB/s and latencies of the order of microseconds. To satisfy such requirements, a heterogeneous readout system based on FPGA readout cards and GPU-based computing nodes coupled by InfiniBand has been developed. The incoming data from the back-end electronics is delivered directly into the internal memory of GPUs through a dedicated peer-to-peer PCIe communication. High performance DMA engines have been developed for direct communication between FPGAs and GPUs using 'DirectGMA (AMD)' and 'GPUDirect (NVIDIA)' technologies. The proposed infrastructure is a candidate for future generations of event building clusters, high-level trigger filter farms and low-level trigger systems. In this paper the heterogeneous FPGA-GPU architecture will be presented and its performance discussed.

  14. The version control service for ATLAS data acquisition configuration files (keywords: DAQ, configuration, OKS, XML)

    CERN Document Server

    Soloviev, Igor; The ATLAS collaboration

    2012-01-01

    To configure a data taking session, the ATLAS systems and detectors store more than 160 MBytes of data acquisition related configuration information in OKS XML files. The total number of files exceeds 1300 and they are updated by many system experts. In the past, from time to time after such updates, we experienced problems caused by XML syntax errors or an inconsistent state of the files from the point of view of the overall ATLAS configuration. It was not always possible to know who made the modification causing problems or how to go back to a previous version of the modified file. A few years ago a special service addressing these issues was implemented and deployed on ATLAS Point-1. It excludes direct write access to XML files stored in a central database repository. Instead, for an update the files are copied into a user repository, validated after modifications and committed using a version control system. The system's callback updates the central repository. Also, it keeps track of all modifications providi...
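
    The validate-before-commit workflow described above can be sketched as follows; the paths are hypothetical, XML well-formedness stands in for the full consistency validation, and a plain file copy stands in for the version-control commit and callback:

```python
# Sketch: an update lands in a scratch user copy, is validated, and
# only then promoted to the central repository. Broken files can
# never reach the central store.
import os
import shutil
import tempfile
import xml.etree.ElementTree as ET

def try_update(central_path, new_content):
    """Validate a candidate update in a scratch copy; promote only if valid."""
    workdir = tempfile.mkdtemp()
    candidate = os.path.join(workdir, os.path.basename(central_path))
    with open(candidate, "w") as f:
        f.write(new_content)
    try:
        ET.parse(candidate)                   # reject XML syntax errors early
    except ET.ParseError:
        return False                          # central repository stays untouched
    shutil.copy(candidate, central_path)      # stand-in for the VCS commit
    return True

central = os.path.join(tempfile.mkdtemp(), "partition.xml")
with open(central, "w") as f:
    f.write("<cfg/>")
assert try_update(central, "<cfg><seg/></cfg>")   # valid update accepted
assert not try_update(central, "<cfg><unclosed>") # broken XML rejected
with open(central) as f:
    assert f.read() == "<cfg><seg/></cfg>"        # last good version survives
```

    A real version control system additionally records who committed what and when, which is exactly the traceability the original ad-hoc editing lacked.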

  15. Data acquisition system for LHCb calorimeter

    International Nuclear Information System (INIS)

    Dai Gang; Gong Guanghua; Shao Beibei

    2007-01-01

    The LHCb Calorimeter system is mainly used to identify and measure the energy of the photons, electrons, and hadrons produced by proton collisions. TELL1 is a common FPGA-based data acquisition platform for the LHCb experiment; it allows a custom data acquisition and processing method to be adopted for each detector and provides the standard data for the CPU matrix. This paper provides a novel DAQ and data processing model in VHDL for the Calorimeter. Based on this model, we have built an effective Calorimeter DAQ system, which will be used in the LHCb experiment. (authors)

  16. Data Acquisition with GPUs: The DAQ for the Muon g-2 Experiment at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Gohn, W. [Kentucky U.

    2016-11-15

    Graphical Processing Units (GPUs) have recently become a valuable computing tool for the acquisition of data at high rates and for a relatively low cost. The devices work by parallelizing the code into thousands of threads, each executing a simple process, such as identifying pulses from a waveform digitizer. The CUDA programming library can be used to effectively write code to parallelize such tasks on Nvidia GPUs, providing a significant upgrade in performance over CPU based acquisition systems. The muon g-2 experiment at Fermilab is heavily relying on GPUs to process its data. The data acquisition system for this experiment must have the ability to create deadtime-free records from 700 µs muon spills at a raw data rate of 18 GB per second. Data will be collected using 1296 channels of µTCA-based 800 MSPS, 12-bit waveform digitizers and processed in a layered array of networked commodity processors with 24 GPUs working in parallel to perform a fast recording of the muon decays during the spill. The described data acquisition system is currently being constructed, and will be fully operational before the start of the experiment in 2017.
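
    The per-waveform task that each GPU thread performs, locating pulses in a digitizer trace, can be shown serially; the threshold and trace values below are illustrative, and the real implementation runs this kernel in CUDA over thousands of traces at once:

```python
# Serial model of the pulse-finding kernel: return the sample ranges
# where the trace exceeds a threshold (one pulse per contiguous run).
def find_pulses(trace, threshold):
    """Return (start, end) index ranges where trace > threshold."""
    pulses, start = [], None
    for i, v in enumerate(trace):
        if v > threshold and start is None:
            start = i                      # rising edge: pulse begins
        elif v <= threshold and start is not None:
            pulses.append((start, i))      # falling edge: pulse ends
            start = None
    if start is not None:
        pulses.append((start, len(trace))) # pulse still open at trace end
    return pulses

trace = [0, 0, 5, 8, 6, 0, 0, 4, 0]
assert find_pulses(trace, 3) == [(2, 5), (7, 8)]
```

    Because each trace is independent, the mapping to one GPU thread (or thread block) per trace is embarrassingly parallel, which is what makes the 18 GB/s rate tractable.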

  17. A triggerless digital data acquisition system for nuclear decay experiments

    Energy Technology Data Exchange (ETDEWEB)

    Agramunt, J.; Tain, J. L.; Albiol, F.; Algora, A.; Estevez, E.; Giubrone, G.; Jordan, M. D.; Molina, F.; Rubio, B.; Valencia, E. [Instituto de Fisica Corpuscular, Centro Mixto C.S.I.C. - Univ. Valencia, Apdo. Correos 22085, 46071 Valencia (Spain)

    2013-06-10

    In nuclear decay experiments an important goal of the Data Acquisition (DAQ) system is to allow the reconstruction of time correlations between signals registered in different detectors. Classically, DAQ systems are based on a trigger that starts the event acquisition, and all data related with the event of that trigger are collected as one compact structure. New technologies and electronics developments offer new possibilities to nuclear experiments with the use of sampling ADCs. This type of ADC is able to provide the pulse shape, height and a time stamp of the signal. This new feature (the time stamp) allows new systems to run without an event trigger. Later, the event can be reconstructed using the time stamp information. In this work we present a new DAQ developed for β-delayed neutron emission experiments. Due to the long moderation time of neutrons, we opted for a self-triggered DAQ based on commercial digitizers. With this DAQ a negligible acquisition dead time was achieved while keeping a maximum of event information and flexibility in time correlations.
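
    Offline event reconstruction from such a triggerless stream amounts to sorting hits by time stamp and grouping those that fall within a coincidence window. A minimal sketch (the window width and channel names are illustrative):

```python
# Triggerless event building: sort the hit stream by time stamp and
# start a new event whenever the gap to the previous hit exceeds the
# coincidence window.
def build_events(hits, window=200):
    """hits: (timestamp, channel) tuples; returns lists of correlated hits."""
    events, current = [], []
    for t, ch in sorted(hits):
        if current and t - current[-1][0] > window:
            events.append(current)         # gap too large: close the event
            current = []
        current.append((t, ch))
    if current:
        events.append(current)
    return events

hits = [(100, "beta"), (150, "neutron"), (5000, "beta")]
evts = build_events(hits)
assert len(evts) == 2 and len(evts[0]) == 2   # beta-neutron pair + lone beta
```

    A wide window naturally accommodates the long neutron moderation time, which is exactly why the time-stamp approach suits β-delayed neutron measurements better than a fixed hardware gate.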

  18. Central control system for the EAST tokamak

    International Nuclear Information System (INIS)

    Sun Xiaoyang; Ji Zhenshan; Wu Yicun; Luo Jiarong

    2008-01-01

    The architecture, the main functions and the design scheme of the central control system and the collaboration system of the EAST tokamak are described. The main functions of the central control system are to supply a unified control interface for all the control, diagnostic, and data acquisition (DAQ) subsystems and to synchronize all those subsystems. (authors)

  19. Development of Baby-EBM Interface System

    International Nuclear Information System (INIS)

    Mukhlis Mokhtar; Abu Bakar Ghazali; Muhammad Zahidee Taat

    2010-01-01

    This paper explains the work being done to develop an interface system for the Baby Electron Beam Machine (EBM). The function of the system is the safety, control, and monitoring of the Baby-EBM. The system is integrated using data acquisition (DAQ) hardware, with the software developed in LabVIEW. (author)

  20. Development of Baby-EBM Interface System

    Energy Technology Data Exchange (ETDEWEB)

    Mokhtar, Mukhlis; Ghazali, Abu Bakar; Taat, Muhammad Zahidee [Accelerator Development Center, Malaysian Nuclear Agency, Bangi, Kajang, Selangor (Malaysia), Technical Support Div.

    2010-07-01

    This paper describes the work being done to develop an interface system for the Baby Electron Beam Machine (EBM). The system provides safety, control, and monitoring functions for the Baby-EBM. It is integrated using data acquisition (DAQ) hardware, with the software developed in LabVIEW. (author)

  1. A new data acquisition system for pelletron-LINAC experiments

    International Nuclear Information System (INIS)

    Ramachandran, K.; Chatterjee, A.; Singh, Sudheer; Jha, K.; Joy, Saju; Behere, A.; Goadgoankar, M.D.

    2007-01-01

    The LINAC booster facility coupled with the Pelletron accelerator at Mumbai, and the plans to have large detector arrays such as the Indian National Gamma Array, Charged Particle Array, Neutron Array, BaF2 array, etc., pose the new challenge of a Data Acquisition system (DAQ) with a throughput an order of magnitude higher than that of the present CAMAC system. The major limitation of CAMAC readout is the 1 μs/word readout time. To augment the CAMAC throughput, a new FERA (Fast Encoding and Readout) based data acquisition system, in which the FERA bus serves as a readout bus for the CAMAC ADCs, was developed at BARC. With this FERA DAQ it is possible to read out CAMAC ADCs at 150 ns/word. This talk presents the new DAQ system used at the BARC-TIFR Pelletron Accelerator facility. (author)

  2. Modular Measuring System for Assessment of the Thyroid Gland Functional State

    Directory of Open Access Journals (Sweden)

    Vladimir Rosik

    2005-01-01

    The distributed modular system BioLab for biophysical examinations, enabling assessment of the thyroid gland functional state, is presented in this paper. The BioLab system is based on a standard notebook or desktop PC connected to an Ethernet-based network of two smart sensors. These sensors are programmed and controlled from the PC and measure selected biosignals of the human cardiovascular and neuromuscular systems that are influenced by the production of thyroid gland hormones. The recorded biosignals are processed in the PC, and peripheral indicators characterizing the thyroid gland functional state are evaluated.

  3. Control and monitoring system design study for the UNK experimental setups

    International Nuclear Information System (INIS)

    Ekimov, A.; Ermolin, Yu.; Matveev, M.; Ovcharov, S.; Petrov, V.; Vaniev, V.

    1992-01-01

    At present, a number of experimental setups for the new UNK project are under construction. A common approach to the architecture of the control/DAQ/trigger systems will be used in the development of electronics for all these detectors, and a system analysis and design group has been formed for this purpose. The group's activity is aimed at the development of such a unified system. It has started with the control and monitoring system, as one of the most important parts and the environment for the DAQ/trigger systems. A status report on the group's activity is presented. (author)

  4. Acquisition System and Detector Interface for Power Pulsed Detectors

    CERN Document Server

    Cornat, R

    2012-01-01

    A common DAQ system is being developed within the CALICE collaboration. It provides a flexible and scalable architecture based on Gigabit Ethernet and 8b/10b serial links in order to transmit slow control data, fast signals, and readout data. A detector interface (DIF) connects the detectors to the DAQ system; it is based on a single firmware shared among the collaboration but targeted at various physical implementations. The DIF allows packets of data to be built, stored, and queued, controls the detectors, and provides USB and serial-link connectivity. The overall architecture is foreseen to manage several hundred thousand channels.

  5. A Data Acquisition System for CALICE AHCAL calorimeter

    CERN Document Server

    Kvasnicka, J. (on behalf of the CALICE collaboration)

    2017-01-01

    The data acquisition system (DAQ) for a highly granular analogue hadron calorimeter (AHCAL) for the future International Linear Collider is presented. The developed DAQ chain has several stages of aggregation and scales up to the 8 million channels foreseen for the AHCAL detector design. The largest aggregation device, the Link Data Aggregator, has 96 HDMI connectors, four Kintex-7 FPGAs, and a central Zynq System-on-Chip. The architecture and performance results are shown in detail, along with experience from DESY testbeams with a small detector prototype consisting of 15 detector layers.

  6. Design of a VME bus controller based on ARM7 with embedded system

    International Nuclear Information System (INIS)

    Wang Yanyu; Qiao Weimin; Guo Yuhui; Li Xiaoqiang; Chinese Academy of Sciences, Beijing

    2005-01-01

    This paper briefly introduces a VME-like bus controller solution. Samsung's S3C4510B 16/32-bit RISC microcontroller, built around an ARM7 core, is a high-performance microcontroller supporting Ethernet-based systems. The authors used the S3C4510B, a VIC-068A, and CPLD devices to build a high-speed data route from the VME-like bus to the network; a fast link between the back-end computer database and the front-end target modules is thus achieved over the network. (authors)

  7. The operational performance of the ATLAS trigger and data acquisition system and its possible evolution

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The first part of this presentation gives an overview of the operational performance of the DAQ system during 2011 and the first months of data taking in 2012. It describes how the flexibility inherent in the design of the system has been exploited to meet the changing needs of ATLAS data taking and, in some cases, to push performance beyond the original design specification. The experience accumulated in the ATLAS DAQ/HLT system operation during these years has also stimulated interest in exploring possible evolutions, despite the success of the current design. One attractive direction is to merge three systems - the se...

  8. Data acquisition system for high resolution chopper spectrometer (HRC) at J-PARC

    International Nuclear Information System (INIS)

    Yano, Shin-ichiro; Itoh, Shinichi; Satoh, Setsuo; Yokoo, Tetsuya; Kawana, Daichi; Sato, Taku J.

    2011-01-01

    We installed the data acquisition (DAQ) system on the High Resolution Chopper Spectrometer (HRC) at beamline BL12 at the Materials and Life Science Experimental Facility (MLF) of the Japan Proton Accelerator Research Complex (J-PARC). In inelastic neutron scattering experiments with the HRC, the event data of the detected neutrons are processed in the DAQ system and visualized in the form of the dynamic structure factor. We confirmed that the data analysis process works well by visualizing excitations in single-crystal magnetic systems probed by inelastic neutron scattering.

  9. Asymmetric Data Acquisition System for an Endoscopic PET-US Detector

    Science.gov (United States)

    Zorraquino, Carlos; Bugalho, Ricardo; Rolo, Manuel; Silva, Jose C.; Vecklans, Viesturs; Silva, Rui; Ortigão, Catarina; Neves, Jorge A.; Tavernier, Stefaan; Guerra, Pedro; Santos, Andres; Varela, João

    2016-02-01

    According to current prognosis studies of pancreatic cancer, the survival rate is still as low as 6%, mainly due to late detection. Taking into account the location of the disease within the body, and making use of the level of miniaturization that can currently be achieved in radiation detectors, the EndoTOFPET-US collaboration aims at the development of a multimodal imaging technique for endoscopic pancreas exams that combines the benefits of high-resolution metabolic information from time-of-flight (TOF) positron emission tomography (PET) with anatomical information from ultrasound (US). A system with such capabilities calls for an application-specific, high-performance data acquisition (DAQ) system able to control and read out data from different detectors. The system is composed of two novel detectors: a PET head extension for a commercial US endoscope, placed internally close to the region of interest (ROI), and a PET plate placed over the patient's abdomen in coincidence with the PET head. These two detectors send asymmetric data streams that must be handled by the DAQ system. The chosen approach is a DAQ capable of multi-level triggering, distributed across the two on-detector electronics and the off-detector electronics located in the reconstruction workstation. This manuscript provides an overview of the design of this innovative DAQ system; based on results obtained with final prototypes of the two detectors and the DAQ, we conclude that a distributed multi-level triggering DAQ system is suitable for endoscopic PET detectors and shows potential for application in different scenarios with asymmetric data sources.

  10. Efficient network monitoring for large data acquisition systems

    International Nuclear Information System (INIS)

    Savu, D.O.; Martin, B.; Al-Shabibi, A.; Sjoen, R.; Batraneanu, S.M.; Stancu, S.N.

    2012-01-01

    Though constantly evolving and improving, the available network monitoring solutions have limitations when applied to the infrastructure of a high-speed, real-time data acquisition (DAQ) system. DAQ networks are particular computer networks where experts have to pay attention to both individual subsections and system-wide traffic flows while monitoring the network. The ATLAS network at the Large Hadron Collider (LHC) has more than 200 switches interconnecting 3500 hosts, totaling 8500 high-speed links. The use of heterogeneous tools for monitoring the various infrastructure parameters, in order to assure optimal DAQ system performance, proved to be a tedious and time-consuming task for experts. To alleviate this problem, we used our networking and DAQ expertise to build a flexible and scalable monitoring system providing an intuitive user interface with the same look and feel irrespective of the data provider used. Our system uses custom-developed components for critical performance monitoring and seamlessly integrates complementary data from auxiliary tools such as NAGIOS, information services, and custom databases. A number of techniques (e.g. normalization, aggregation, and data caching) were used to improve the user-interface response time. The end result is a unified monitoring interface, giving fast and uniform access to system statistics, which significantly reduced the time spent by experts on ad hoc and post-mortem analysis. (authors)
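
    The normalization, aggregation, and caching techniques mentioned above can be sketched as follows (a hypothetical example, not ATLAS code; the provider units, switch names, and cache TTL are invented): raw counters from heterogeneous providers are converted to a common unit, summed per switch, and cached briefly so repeated UI queries do not hit the data providers again.

```python
import time

def normalize(sample):
    """Convert a raw sample to bits per second regardless of provider unit."""
    factor = {"bps": 1, "kbps": 1_000, "Bps": 8}[sample["unit"]]
    return sample["value"] * factor

class CachedAggregator:
    def __init__(self, fetch, ttl=5.0):
        self.fetch, self.ttl = fetch, ttl
        self._cache = {}                    # switch -> (expiry_time, aggregate)

    def utilization(self, switch):
        now = time.monotonic()
        hit = self._cache.get(switch)
        if hit and hit[0] > now:
            return hit[1]                   # serve from cache
        total = sum(normalize(s) for s in self.fetch(switch))
        self._cache[switch] = (now + self.ttl, total)
        return total

samples = {"sw1": [{"value": 2, "unit": "kbps"}, {"value": 125, "unit": "Bps"}]}
agg = CachedAggregator(lambda sw: samples[sw])
total_bps = agg.utilization("sw1")          # 2000 + 1000 bits/s
```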

  11. A Measurement and Power Line Communication System Design for Renewable Smart Grids

    Science.gov (United States)

    Kabalci, E.; Kabalci, Y.

    2013-10-01

    Data communication over electric power lines can be managed easily and economically, since grid connections are already spread all over the world. This paper investigates the applicability of Power Line Communication (PLC) in an energy generation system based on photovoltaic (PV) panels, with a modeling study in Matlab/Simulink. The Simulink model covers the designed PV panels, a boost converter with a Perturb and Observe (P&O) control algorithm, a full-bridge inverter, and the binary phase shift keying (BPSK) modem that is used to transfer the measured data over the power lines. This study proposes a novel method that uses the electrical power lines not only to carry the line voltage but also to transmit the measurements of renewable energy generation plants, thus minimizing additional monitoring costs such as those of SCADA, Ethernet-based, or GSM-based systems. Although this study was performed with solar power plants, the proposed model can be applied to other renewable generation systems.
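
    A minimal BPSK sketch (illustrative only, not the Simulink model from the paper; the carrier parameters are invented) shows the principle used by such a modem: each bit selects a carrier phase of 0 or π, and the receiver recovers the bit by correlating each symbol against the reference carrier.

```python
import math

def bpsk_modulate(bits, samples_per_bit=8):
    """Return carrier samples with 180-degree phase shifts encoding the bits."""
    wave = []
    for bit in bits:
        phase = 0.0 if bit else math.pi
        for n in range(samples_per_bit):
            wave.append(math.cos(2 * math.pi * n / samples_per_bit + phase))
    return wave

def bpsk_demodulate(wave, samples_per_bit=8):
    """Correlate each symbol with the reference carrier to recover the bits."""
    ref = [math.cos(2 * math.pi * n / samples_per_bit) for n in range(samples_per_bit)]
    bits = []
    for i in range(0, len(wave), samples_per_bit):
        corr = sum(w * r for w, r in zip(wave[i:i + samples_per_bit], ref))
        bits.append(1 if corr > 0 else 0)
    return bits

payload = [1, 0, 1, 1, 0, 0, 0, 1]   # e.g. one byte of a PV measurement
recovered = bpsk_demodulate(bpsk_modulate(payload))
```

    On a real power line the received wave would carry noise and attenuation; the correlation-based decision is what makes BPSK robust in that setting.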

  12. The ALICE data acquisition system

    CERN Document Server

    Carena, F; Chapeland, S; Chibante Barroso, V; Costa, F; Dénes, E; Divià, R; Fuchs, U; Grigore, A; Kiss, T; Simonetti, G; Soós, C; Telesca, A; Vande Vyvre, P; Von Haller, B

    2014-01-01

    In this paper we describe the design, the construction, the commissioning, and the operation of the Data Acquisition (DAQ) and Experiment Control Systems (ECS) of the ALICE experiment at the CERN Large Hadron Collider (LHC). The DAQ and the ECS are the systems used, respectively, for the acquisition of all physics data and for the overall control of the experiment. They are two computing systems made of hundreds of PCs and data storage units interconnected via two networks. The collection of experimental data from the detectors is performed by several hundred high-speed optical links. We describe in detail the design considerations for these systems, which handle the extreme data throughput resulting from central lead-ion collisions at LHC energy. The implementation of the resulting requirements into hardware (custom optical links and commercial computing equipment), infrastructure (racks, cooling, power distribution, control room), and software led to many innovative solutions which are described together with ...

  13. Control system for BARC-TIFR Pelletron

    International Nuclear Information System (INIS)

    Singh, S.; Singh, P.; Gore, J.; Kulkarni, S.

    2012-01-01

    The BARC-TIFR Pelletron is a 14 MV tandem accelerator that has been in operation for more than 20 years. It had DOS-based control system software running on a 486 PC, which could not be ported to new PCs. The system was based on a serial highway and a Uport-adapter-based CAMAC crate controller, which are no longer available, and all spares had been used. Hence we replaced the CAMAC controller with an in-house-developed Ethernet-based CAMAC controller, and new software has been developed. The new control system software is based on the Linux operating system, with a graphical user interface developed using Trolltech's Qt API, and can be easily ported to MS Windows. (author)

  14. Event Recording Data Acquisition System and Experiment Data Management System for Neutron Experiments at MLF, J-PARC

    Science.gov (United States)

    Nakatani, T.; Inamura, Y.; Moriyama, K.; Ito, T.; Muto, S.; Otomo, T.

    Neutron scattering can be a powerful probe in the investigation of many phenomena in the materials and life sciences. The Materials and Life Science Experimental Facility (MLF) at the Japan Proton Accelerator Research Complex (J-PARC) is a leading center of experimental neutron science and boasts one of the most intense pulsed neutron sources in the world. The MLF currently has 18 experimental instruments in operation that support a wide variety of users from across a range of research fields. The instruments include optical elements, sample environment apparatus and detector systems that are controlled and monitored electronically throughout an experiment. Signals from these components and those from the neutron source are converted into a digital format by the data acquisition (DAQ) electronics and recorded as time-tagged event data in the DAQ computers using "DAQ-Middleware". Operating in event mode, the DAQ system produces extremely large data files (~GB) under various measurement conditions. Simultaneously, the measurement meta-data indicating each measurement condition is recorded in XML format by the MLF control software framework "IROHA". These measurement event data and meta-data are collected in the MLF common storage and cataloged by the MLF Experimental Database (MLF EXP-DB) based on a commercial XML database. The system provides a web interface for users to manage and remotely analyze experimental data.
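
    The benefit of event-mode recording described above can be sketched as follows (the event list and meta-data intervals are invented for illustration; this is not MLF code): because every neutron is stored as a time-tagged event, histograms can be rebuilt offline under any measurement condition taken from the recorded meta-data.

```python
from collections import Counter

events = [  # (time_tag_us, detector_pixel) -- hypothetical event-mode data
    (10, 3), (25, 3), (40, 7), (55, 3), (70, 7), (85, 1),
]
# intervals (start, end) during which the sample environment was stable,
# as would be read from the recorded meta-data (hypothetical values)
stable = [(0, 30), (50, 80)]

def histogram(events, intervals):
    """Count events per detector pixel, keeping only time tags inside the
    accepted intervals -- the offline equivalent of a hardware gate."""
    accepted = lambda t: any(a <= t <= b for a, b in intervals)
    return Counter(pixel for t, pixel in events if accepted(t))

counts = histogram(events, stable)   # pixel -> gated event count
```

    Changing the intervals and re-running the histogram replaces what would otherwise require a new measurement with a fixed hardware gate.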

  15. Design of the data acquisition system for the nuclear physics experiments at VECC

    International Nuclear Information System (INIS)

    Dhara, P.; Roy, A.; Maity, P.; Singhai, P.; Roy, P.S.

    2012-01-01

    The beam from the K130 room-temperature cyclotron has been used extensively for nuclear physics experiments for the last three decades. The typical beam energy for these experiments is approximately 7-10 MeV/nucleon for heavy ions and 8-20 MeV/nucleon for light ions. The number of detector channels used may vary from one to a few hundred. The proposed detector system for experiments with the superconducting cyclotron may have more than 1200 detector channels and may generate more than one million parameters per second. A VME (Versa Module Europa) and CAMAC (Computer Automated Measurement and Control) based data acquisition system (DAQ) is being used to cater to the experimental needs. The current system is built from various commercially available modules in NIM (Nuclear Instrumentation Module), CAMAC, and VME form factors; such a setup becomes very complicated to maintain for a large number of detectors. Alternatively, a distributed DAQ system based on embedded technology is proposed, in which the traditional analog processing may be replaced by digital filters on FPGA (Field Programmable Gate Array) boards. This paper describes the design of the current DAQ system and the status of the proposed distributed DAQ scheme, with the capability of handling heterogeneous detector systems. (author)

  16. Time-stamping system for nuclear physics experiments at RIKEN RIBF

    International Nuclear Information System (INIS)

    Baba, H.; Ichihara, T.; Ohnishi, T.; Takeuchi, S.; Yoshida, K.; Watanabe, Y.; Ota, S.; Shimoura, S.; Yoshinaga, K.

    2015-01-01

    A time-stamping system for nuclear physics experiments has been introduced at the RIKEN Radioactive Isotope Beam Factory. Individual trigger signals can be applied to separate data acquisition (DAQ) systems, and after the measurements are complete, the separately taken data are merged based on the time-stamp information. In a typical experiment, coincidence trigger signals are formed from multiple detectors to record only the desired events. The time-stamping system instead allows the use of minimum-bias triggers: since the coincidence conditions are applied in software, a variety of physics events can be identified flexibly. The live time of a DAQ system is important when attempting to determine reaction cross-sections; however, the combined live time of separate DAQ systems is not known a priori, because it depends not only on the DAQ dead time but also on the coincidence conditions. Using the proposed time-stamping system, all trigger timings can be acquired, so that the combined live time can be easily determined. The combined live time is also estimated using Monte Carlo simulations, and the results are compared with the directly measured values in order to assess the accuracy of the simulation.
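
    The software coincidence step described above can be sketched as follows (the timestamp streams, stream names, and coincidence window are invented for illustration): the two time-ordered streams from the separate DAQ systems are scanned once, and timestamp pairs falling within the window are reported as coincident events.

```python
def coincidences(stream_a, stream_b, window=50):
    """Return pairs (ta, tb) of timestamps within the coincidence window,
    scanning both time-ordered streams in a single pass."""
    pairs, j = [], 0
    for ta in stream_a:
        # advance b past entries too early to match this or any later a
        while j < len(stream_b) and stream_b[j] < ta - window:
            j += 1
        k = j
        while k < len(stream_b) and stream_b[k] <= ta + window:
            pairs.append((ta, stream_b[k]))
            k += 1
    return pairs

beam_counter = [100, 600, 1200]   # timestamps from DAQ system A (hypothetical)
gamma_array = [120, 130, 1500]    # timestamps from DAQ system B (hypothetical)
matched = coincidences(beam_counter, gamma_array, window=50)
```

    Because the window and the pairing logic live in software, the same recorded streams can be re-analyzed with different coincidence conditions, which is the flexibility the abstract describes.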

  17. The ALICE data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F. [European Organization for Nuclear Research (CERN), Geneva 23 (Switzerland); Dénes, E. [Research Institute for Particle and Nuclear Physics, Wigner Research Center, Budapest (Hungary); Divià, R.; Fuchs, U. [European Organization for Nuclear Research (CERN), Geneva 23 (Switzerland); Grigore, A. [European Organization for Nuclear Research (CERN), Geneva 23 (Switzerland); Politehnica University of Bucharest, Bucharest (Romania); Kiss, T. [Cerntech Ltd., Budapest (Hungary); Simonetti, G. [Dipartimento Interateneo di Fisica ‘M. Merlin’, Bari (Italy); Soós, C.; Telesca, A.; Vande Vyvre, P. [European Organization for Nuclear Research (CERN), Geneva 23 (Switzerland); Haller, B. von, E-mail: bvonhall@cern.ch [European Organization for Nuclear Research (CERN), Geneva 23 (Switzerland)

    2014-03-21

    In this paper we describe the design, the construction, the commissioning, and the operation of the Data Acquisition (DAQ) and Experiment Control Systems (ECS) of the ALICE experiment at the CERN Large Hadron Collider (LHC). The DAQ and the ECS are the systems used, respectively, for the acquisition of all physics data and for the overall control of the experiment. They are two computing systems made of hundreds of PCs and data storage units interconnected via two networks. The collection of experimental data from the detectors is performed by several hundred high-speed optical links. We describe in detail the design considerations for these systems, which handle the extreme data throughput resulting from central lead-ion collisions at LHC energy. The implementation of the resulting requirements into hardware (custom optical links and commercial computing equipment), infrastructure (racks, cooling, power distribution, control room), and software led to many innovative solutions, which are described together with a presentation of all the major components of the systems as currently realized. We also report on the performance achieved during the first period of data taking (from 2009 to 2013), often exceeding the specifications in the DAQ Technical Design Report.

  18. Using a Control System Ethernet Network as a Field Bus

    CERN Document Server

    De Van, William R; Lawson, Gregory S; Wagner, William H; Wantland, David M; Williams, Ernest

    2005-01-01

    A major component of a typical accelerator distributed control system (DCS) is a dedicated, large-scale local area communications network (LAN). The SNS EPICS-based control system uses a LAN based on the popular IEEE-802.3 set of standards (Ethernet). Since the control system network infrastructure is available throughout the facility, and since Ethernet-based controllers are readily available, it is tempting to use the control system LAN for "fieldbus" communications to low-level control devices (e.g. vacuum controllers; remote I/O). These devices may or may not be compatible with the high-level DCS protocols. This paper presents some of the benefits and risks of combining high-level DCS communications with low-level "field bus" communications on the same network, and describes measures taken at SNS to promote compatibility between devices connected to the control system network.

  19. Remote device control and monitor system for the LHD deuterium experiments

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, Hideya, E-mail: nakanisi@nifs.ac.jp [National Institute for Fusion Science (NIFS), Toki, Gifu 509-5292 (Japan); Dept. Fusion Science, SOKENDAI (The Graduate University for Advanced Studies), Toki, Gifu 509-5292 (Japan); Ohsuna, Masaki; Ito, Tatsuki; Nonomura, Miki; Imazu, Setsuo; Emoto, Masahiko; Iwata, Chie; Yoshida, Masanobu; Yokota, Mitsuhiro; Maeno, Hiroya; Aoyagi, Miwa; Ogawa, Hideki; Nakamura, Osamu; Morita, Yoshitaka; Inoue, Tomoyuki; Watanabe, Kiyomasa [National Institute for Fusion Science (NIFS), Toki, Gifu 509-5292 (Japan); Ida, Katsumi; Ishiguro, Seiji; Kaneko, Osamu [National Institute for Fusion Science (NIFS), Toki, Gifu 509-5292 (Japan); Dept. Fusion Science, SOKENDAI (The Graduate University for Advanced Studies), Toki, Gifu 509-5292 (Japan)

    2016-11-15

    Highlights: • Remote device control will be essential for the LHD deuterium experiments. • A central management GUI controls the power distribution for devices. • For safety, power management is separated from operational commanding. • Wi-Fi was tested and found to be unreliable with fusion plasmas. - Abstract: Upon beginning the LHD deuterium experiment, the opportunity for maintenance work in the torus hall will be conspicuously reduced, such that all instruments must be controlled remotely. The LHD data acquisition (DAQ) and archiving system uses about 110 DAQ front-ends, and a DAQ central control and monitor system has been implemented for their remote management. This system is based on the "multi-agent" model, with a unified communication protocol. Since the DAQ front-end electronics may suffer from the "single-event effect" (SEE) of D-D neutrons, software-based remote operation might become ineffective; securely interrupting or recycling the electrical power of a device is then indispensable for recovering from a non-responding fault condition. In this study, a centralized control and monitor system has been developed for a number of power distribution units (PDUs). This system adopts a plug-in structure in which the plug-in modules absorb the differences among the commercial products of numerous vendors. The combination of the above-mentioned functionalities realizes a flexible and highly reliable remote control infrastructure for the plasma diagnostics and device management in LHD.

  20. The implementation of the Star Data Acquisition System using a Myrinet Network

    International Nuclear Information System (INIS)

    Landgraf, J.M.; Adler, C.; Levine, M.J.; Ljubicic, A. JR.

    2000-01-01

    We will present results from the first year of operation of the STAR DAQ system using a Myrinet Network. STAR is one of four experiments to have been commissioned at the Relativistic Heavy Ion Collider (RHIC) at BNL during 1999 and 2000. The DAQ system is fully integrated with a Level 3 Trigger. The combined system currently consists of 33 Myrinet Nodes which run in a mixed environment of MVME processors running VxWorks, DEC Alpha workstations running Linux, and SUN Solaris machines. The network will eventually contain up to 150 nodes for the expected final size of the L3 processor farm. Myrinet is a switched, high speed, low latency network produced by Myricom and available for PCI and PMC on a wide variety of platforms. The STAR DAQ system uses the Myrinet network for messaging, L3 processing, and event building. After the events are built, they are sent via Gigabit Ethernet to the RHIC computing facility and stored to tape using HPSS. The combined DAQ/L3 system processes 160 MB events at 100 Hz, compresses each event to ∼20 MB, and performs tracking on the events to implement a physics-based filter to reduce the data storage rate to 20 MB/sec

  1. 78 FR 50079 - Information Collection Activities: Safety and Environmental Management Systems (SEMS); Proposed...

    Science.gov (United States)

    2013-08-16

    ... DEPARTMENT OF THE INTERIOR Bureau of Safety and Environmental Enforcement [Docket ID BSEE-2013-0005; OMB Control Number 1014-0017: 134E1700D2 EEEE500000 ET1SF0000.DAQ000] Information Collection Activities: Safety and Environmental Management Systems (SEMS); Proposed Collection; Comment Request...

  2. An updated data acquisition and analysis system at RIBLL

    International Nuclear Information System (INIS)

    Chen, Z.Q.; Ye, Y.L.; Zhan, W.L.; Xiao, G.Q.; Guo, Z.Y.; Xu, H.S.; Wang, J.C.; Jiang, D.X.; Wang, Q.J.; Zheng, T.; Zhang, G.L.; Wu, C.E.; Li, Z.H.; Li, X.Q.; Hu, Q.Y.; Pang, D.Y.; Wang, J.

    2005-01-01

    An updated data acquisition and analysis system for beam tuning and nuclear physics experiments at RIBLL is presented. The system hardware is based on the standard CAMAC bus with a SCSI KSC3929-Z1B crate controller. The system software has a user-friendly GUI, written in C/C++ using Microsoft Visual C++ .NET 2003 with the ROOT class library, and runs on a PC under the Windows 2000 operating system. The performance of the DAQ system is reliable and safe.

  3. New Ethernet Based Optically Transparent Network for Fiber-to-the-Desk Application

    NARCIS (Netherlands)

    Radovanovic, Igor; van Etten, Wim

    2003-01-01

    We present a new optical local area network architecture based on multimode optical fibers and components, short-wavelength lasers and detectors, and the widely used Fast Ethernet protocol. The presented optically transparent network represents a novel approach in fiber-to-the-desk applications. It is

  4. A faster and more reliable data acquisition system for the full performance of the SciCRT

    International Nuclear Information System (INIS)

    Sasai, Y.; Matsubara, Y.; Itow, Y.; Sako, T.; Kawabata, T.; Lopez, D.; Hikimochi, R.; Tsuchiya, A.; Ikeno, M.; Uchida, T.; Tanaka, M.; Munakata, K.; Kato, C.; Nakamura, Y.; Oshima, T.; Koike, T.; Kozai, M.; Shibata, S.; Oshima, A.; Takamaru, H.

    2017-01-01

    The SciBar Cosmic Ray Telescope (SciCRT) is a massive scintillator tracker for observing cosmic rays in a very high-altitude environment in Mexico. The fully active tracker is based on the Scintillator Bar (SciBar) detector developed as a near detector for the KEK-to-Kamioka long-baseline neutrino oscillation experiment (K2K) in Japan. Since the original data acquisition (DAQ) system was developed for the accelerator experiment, we decided to develop a new, robust DAQ system optimized to the needs of our cosmic-ray experiment at the top of Mt. Sierra Negra (4600 m). One of our special requirements is a readout rate 10 times faster. We started by developing a new fast readout back-end board (BEB) based on 100 Mbps SiTCP, a hardware network processor developed for DAQ systems in high-energy physics experiments. The new BEB is potentially 20 times faster than the current one in the case of observing neutrons. We installed the new DAQ system, including the new BEBs, in part of the SciCRT in July 2015, and it has been operating since then. In this paper, we describe the development, the basic performance of the new BEB, the status after the installation in the SciCRT, and the expected future performance.

  5. A faster and more reliable data acquisition system for the full performance of the SciCRT

    Energy Technology Data Exchange (ETDEWEB)

    Sasai, Y., E-mail: sasaiyoshinori@isee.nagoya-u.ac.jp [Institute for Space-Earth Environmental Research, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601 (Japan); Matsubara, Y.; Itow, Y.; Sako, T.; Kawabata, T.; Lopez, D.; Hikimochi, R.; Tsuchiya, A. [Institute for Space-Earth Environmental Research, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601 (Japan); Ikeno, M.; Uchida, T.; Tanaka, M. [High Energy Accelerator Research Organization, KEK, 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan); Munakata, K.; Kato, C.; Nakamura, Y.; Oshima, T.; Koike, T. [Department of Physics, Shinshu University, Asahi, Matsumoto 390-8621 (Japan); Kozai, M. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency (ISAS/JAXA), Sagamihara, Kanagawa 252-5210 (Japan); Shibata, S.; Oshima, A.; Takamaru, H. [College of Engineering, Chubu University, Kasugai 487-8501 (Japan); and others

    2017-06-11

    The SciBar Cosmic Ray Telescope (SciCRT) is a massive scintillator tracker for observing cosmic rays in a very high-altitude environment in Mexico. The fully active tracker is based on the Scintillator Bar (SciBar) detector developed as a near detector for the KEK-to-Kamioka long-baseline neutrino oscillation experiment (K2K) in Japan. Since the original data acquisition (DAQ) system was developed for the accelerator experiment, we decided to develop a new, robust DAQ system optimized to the needs of our cosmic-ray experiment at the top of Mt. Sierra Negra (4600 m). One of our special requirements is a readout rate 10 times faster. We started by developing a new fast readout back-end board (BEB) based on 100 Mbps SiTCP, a hardware network processor developed for DAQ systems in high-energy physics experiments. The new BEB is potentially 20 times faster than the current one in the case of observing neutrons. We installed the new DAQ system, including the new BEBs, in part of the SciCRT in July 2015, and it has been operating since then. In this paper, we describe the development, the basic performance of the new BEB, the status after the installation in the SciCRT, and the expected future performance.

  6. The ngdp framework for data acquisition systems

    OpenAIRE

    Isupov, A. Yu.

    2010-01-01

    The ngdp framework is intended to provide a base for data acquisition (DAQ) system software. The key features of ngdp's design are: high modularity and scalability; use of the kernel context (particularly kernel threads) of the operating system (OS), which avoids preemptive scheduling and unnecessary memory-to-memory copying between contexts; and elimination of intermediate data storage on media slower than main memory, such as hard disks. The ngdp, having the above ...

  7. A multi-chip data acquisition system based on a heterogeneous system-on-chip platform

    CERN Document Server

    Fiergolski, Adrian

    2017-01-01

    The Control and Readout Inner tracking BOard (CaRIBOu) is a versatile readout system targeting a multitude of detector prototypes. It profits from the heterogeneous platform of the Zynq System-on-Chip (SoC), integrating in a monolithic device front-end FPGA resources with back-end software running on a hard-core ARM-based processor. The user-friendly Linux terminal with pre-installed DAQ software is combined with the efficiency and throughput of a system fully implemented in the FPGA fabric. The paper presents the design of the SoC-based DAQ system and its building blocks. It also shows examples of the achieved functionality for the CLICpix2 readout ASIC.

  8. Development of MATLAB software to control data acquisition from a multichannel systems multi-electrode array.

    Science.gov (United States)

    Messier, Erik

    2016-08-01

    A Multichannel Systems (MCS) microelectrode array data acquisition (DAQ) unit is used to collect multichannel electrograms (EGM) from a Langendorff-perfused rabbit heart system to study sudden cardiac death (SCD). MCS provides software through which data processed by the DAQ unit can be displayed and saved, but this software integrates poorly with MATLAB. MCS's software stores recorded EGM data in a MathCad (MCD) format, which is then converted to a text file format. These text files are very large, so importing the EGM data into MATLAB for real-time analysis is very time consuming. Therefore, customized MATLAB software was developed to control the acquisition of data from the MCS DAQ unit and to provide specific laboratory accommodations for this study of SCD. The developed DAQ control software accurately provides real-time display of EGM signals, records and saves EGM signals in MATLAB in the desired format, and produces real-time analysis of the EGM signals, all through an intuitive GUI.

  9. Performance comparison of next generation controller and MPC in real time for a SISO process with low cost DAQ unit

    Directory of Open Access Journals (Sweden)

    V. Bagyaveereswaran

    2016-09-01

    In this paper, a brief overview of the real-time implementation of the next-generation Robust, Tracking, Disturbance rejecting, Aggressive (RTDA) controller and Model Predictive Control (MPC) is provided. The control algorithms are implemented through MATLAB. The plant model used in the controller design is obtained using the system identification tool and the integral response method. The controller model is developed in Simulink using the jMPC tool and executed in real time. The outputs obtained are tested for various constraint values to obtain the desired results. Hardware-in-the-loop implementation is done by interfacing the plant with MATLAB using an Arduino as the data acquisition unit. The performance of RTDA is compared with those of MPC and a Proportional-Integral (PI) controller.
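The PI baseline against which RTDA and MPC are compared can be reproduced in a few lines. A sketch of a discrete PI loop on an illustrative first-order plant; the gains and plant coefficients below are stand-ins, not the identified model from the paper:

```python
def simulate_pi(kp, ki, setpoint, steps=200, dt=0.1, a=0.9, b=0.1):
    """Closed-loop simulation of a discrete PI controller on the
    simple first-order plant x[k+1] = a*x[k] + b*u[k]."""
    x, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt
        u = kp * error + ki * integral   # PI control law
        x = a * x + b * u                # plant update
    return x

final = simulate_pi(kp=2.0, ki=1.0, setpoint=1.0)
```

The integral term drives the steady-state error to zero, which is why even this simple baseline tracks the setpoint; RTDA and MPC differ mainly in how they trade off tracking, robustness, disturbance rejection and constraint handling.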

  10. A Labview Based Leakage Current Monitoring System For HV Insulators

    Directory of Open Access Journals (Sweden)

    N. Mavrikakis

    2015-10-01

    In this paper, a LabVIEW-based leakage current monitoring system for high voltage insulators is described. The system uses a general purpose DAQ system with the addition of different current sensors. The DAQ system consists of a chassis and hot-swappable modules. Through the proper design of the current sensors, low cost modules operating with a suitable input range can be employed. Fully customizable software can be developed using LabVIEW, allowing on-demand changes and incorporation of upgrades. Such a system provides a low cost alternative to specially designed equipment, with the added advantage of maximum flexibility. Further, it can be modified to satisfy the specifications (technical and economical) set under different scenarios. In fact, the system described in this paper has already been installed in the HV Lab of the TEI of Crete, whereas a variation of it is currently in use in the TALOS High Voltage Test Station.

  11. The CMS High Level Trigger System

    CERN Document Server

    Afaq, A; Bauer, G; Biery, K; Boyer, V; Branson, J; Brett, A; Cano, E; Carboni, A; Cheung, H; Ciganek, M; Cittolin, S; Dagenhart, W; Erhan, S; Gigi, D; Glege, F; Gómez-Reino, Robert; Gulmini, M; Gutiérrez-Mlot, E; Gutleber, J; Jacobs, C; Kim, J C; Klute, M; Kowalkowski, J; Lipeles, E; Lopez-Perez, Juan Antonio; Maron, G; Meijers, F; Meschi, E; Moser, R; Murray, S; Oh, A; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Pollet, L; Rácz, A; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Sexton-Kennedy, E; Sumorok, K; Suzuki, I; Tsirigkas, D; Varela, J

    2007-01-01

    The CMS Data Acquisition (DAQ) System relies on a purely software driven High Level Trigger (HLT) to reduce the full Level-1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the Filter Farm and the procedures to validate the filtering code within the DAQ environment are described.

  12. The attitudes and beliefs of Pakistani medical practitioners about depression: a cross-sectional study in Lahore using the Revised Depression Attitude Questionnaire (R-DAQ

    Directory of Open Access Journals (Sweden)

    Mark Haddad

    2016-10-01

    Background Mental disorders such as depression are common and rank as major contributors to the global burden of disease. Condition recognition and subsequent management of depression is variable and influenced by the attitudes and beliefs of clinicians as well as those of patients. Most studies examining health professionals’ attitudes have been conducted in Western nations; this study explores beliefs and attitudes about depression among doctors working in Lahore, Pakistan. Methods A cross-sectional survey conducted in 2015 used a questionnaire concerning demographics, education in psychiatry, beliefs about depression causes, and attitudes about depression using the Revised Depression Attitude Questionnaire (R-DAQ. A convenience sample of 700 non-psychiatrist medical practitioners based in six hospitals in Lahore was approached to participate in the survey. Results Six hundred and one (86 % of the doctors approached consented to participate; almost all respondents (99 % endorsed one of various biopsychosocial causes of depression (38 to 79 % for particular causes, and 37 % (between 13 and 19 % for particular causes noted that supernatural forces could be responsible. Supernatural causes were more commonly held by female doctors, those working in rural settings, and those with greater psychiatry specialist education. Attitudes to depression were mostly less confident or optimistic and less inclined to a generalist perspective than those of clinicians in the UK or European nations, and deterministic perspectives that depression is a natural part of aging or due to personal failings were particularly common. However, there was substantial confidence in the efficacy of antidepressants and psychological therapy. More confident and therapeutically optimistic views and a more generalist perspective about depression management were associated with a rejection of supernatural explanations of the origin of depression. Conclusions Non

  13. The attitudes and beliefs of Pakistani medical practitioners about depression: a cross-sectional study in Lahore using the Revised Depression Attitude Questionnaire (R-DAQ).

    Science.gov (United States)

    Haddad, Mark; Waqas, Ahmed; Qayyum, Wahhaj; Shams, Maryam; Malik, Saad

    2016-10-18

    Mental disorders such as depression are common and rank as major contributors to the global burden of disease. Condition recognition and subsequent management of depression is variable and influenced by the attitudes and beliefs of clinicians as well as those of patients. Most studies examining health professionals' attitudes have been conducted in Western nations; this study explores beliefs and attitudes about depression among doctors working in Lahore, Pakistan. A cross-sectional survey conducted in 2015 used a questionnaire concerning demographics, education in psychiatry, beliefs about depression causes, and attitudes about depression using the Revised Depression Attitude Questionnaire (R-DAQ). A convenience sample of 700 non-psychiatrist medical practitioners based in six hospitals in Lahore was approached to participate in the survey. Six hundred and one (86 %) of the doctors approached consented to participate; almost all respondents (99 %) endorsed one of various biopsychosocial causes of depression (38 to 79 % for particular causes), and 37 % (between 13 and 19 % for particular causes) noted that supernatural forces could be responsible. Supernatural causes were more commonly held by female doctors, those working in rural settings, and those with greater psychiatry specialist education. Attitudes to depression were mostly less confident or optimistic and less inclined to a generalist perspective than those of clinicians in the UK or European nations, and deterministic perspectives that depression is a natural part of aging or due to personal failings were particularly common. However, there was substantial confidence in the efficacy of antidepressants and psychological therapy. More confident and therapeutically optimistic views and a more generalist perspective about depression management were associated with a rejection of supernatural explanations of the origin of depression. Non-psychiatrist medical practitioners in Pakistan hold a range of views

  14. Design and implementation of the wireless high voltage control system

    International Nuclear Information System (INIS)

    Srivastava, Saurabh; Misra, A.; Pandey, H.K.; Thakur, S.K.; Pandit, V.S.

    2011-01-01

    In this paper we describe the implementation of a wireless link for controlling and monitoring the serial data exchanged between the control PC and the interface card (a general DAQ card), replacing the existing RS232-based remote control system for the High Voltage Power Supply (120 kV/50 mA). The enhancement in reliability is achieved by replacing the old RS232-based control system with a wireless system, thereby isolating the ground loop. (author)
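A wireless hop is more prone to corruption than a short RS232 cable, so serial control commands benefit from explicit framing with a checksum. A hedged sketch with an invented frame layout (start byte, length, payload, XOR checksum); the actual protocol of the interface card is not described in the abstract:

```python
def xor_checksum(payload: bytes) -> int:
    """XOR of all payload bytes: a cheap corruption detector."""
    c = 0
    for b in payload:
        c ^= b
    return c

def frame_command(cmd: str) -> bytes:
    """Wrap an ASCII control command as: 0x02, length, payload, checksum."""
    payload = cmd.encode("ascii")
    return bytes([0x02, len(payload)]) + payload + bytes([xor_checksum(payload)])

def parse_frame(frame: bytes) -> str:
    """Validate and strip the framing added by frame_command."""
    if frame[0] != 0x02:
        raise ValueError("bad start byte")
    length = frame[1]
    payload, checksum = frame[2:2 + length], frame[2 + length]
    if xor_checksum(payload) != checksum:
        raise ValueError("checksum mismatch")
    return payload.decode("ascii")

# Round-trip a hypothetical set-voltage command
echo = parse_frame(frame_command("SET HV 120.0"))
```

On a checksum mismatch the receiver can simply request retransmission, which is usually sufficient for low-rate control traffic like power-supply setpoints.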

  15. DAQExpert - An expert system to increase CMS data-taking efficiency

    CERN Document Server

    Andre, Jean-marc Olivier; Branson, James; Chaze, Olivier; Cittolin, Sergio; Contescu, Cristian; Darlea, Georgiana Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, Marc; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan Richard; Gigi, Dominique; Gladki, Maciej Szymon; Glege, Frank; Gomez Ceballos, Guillelmo; Hegeman, Jeroen Guido; Holzner, Andre Georg; Janulis, Mindaugas; Lettrich, Michael; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; O'Dell, Vivian; Orn, Samuel Johan; Orsini, Luciano; Papakrivopoulos, Ioannis; Paus, Christoph Maria Ernst; Petrova, Petia; Petrucci, Andrea; Pieri, Marco; Rabady, Dinyar Sebastian; Racz, Attila; Reis, Thomas; Sakulin, Hannes; Schwick, Christoph; Simelevicius, Dainius; Vougioukas, Michail; Zejdl, Petr

    2017-01-01

    The efficiency of the Data Acquisition (DAQ) of the Compact Muon Solenoid (CMS) experiment for LHC Run 2 is constantly being improved. A significant factor affecting the data-taking efficiency is the experience of the DAQ operator. One of the main responsibilities of the DAQ operator is to carry out the proper recovery procedure in case of a failure of data-taking. At the start of Run 2, understanding a problem and finding the right remedy could take a considerable amount of time (up to many minutes). Operators relied heavily on the support of on-call experts, also outside working hours. Wrong decisions made under time pressure sometimes led to additional overhead in recovery time. To increase the efficiency of CMS data-taking we developed a new expert system, DAQExpert, which provides shifters with optimal recovery suggestions instantly when a failure occurs. DAQExpert is a web application analyzing frequently updated monitoring data from all DAQ components and identifying problems based on expert knowl...
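At its core, such an expert system maps a snapshot of monitoring data through a set of condition/suggestion rules. A toy sketch of the idea; the rule names, monitoring keys and suggestion texts here are invented, and the real DAQExpert knowledge base is far richer:

```python
# Hypothetical rule set: each rule pairs a predicate on the monitoring
# snapshot with a recovery suggestion for the shifter.
RULES = [
    {"name": "FED stuck",
     "condition": lambda m: m.get("rate_hz", 0) == 0 and m.get("fed_backpressure", False),
     "suggestion": "Resync the flagged FED; if the problem persists, red-recycle the subsystem."},
    {"name": "HLT farm saturated",
     "condition": lambda m: m.get("hlt_cpu_pct", 0) > 95,
     "suggestion": "Reduce prescales or verify that all filter units are in the run."},
]

def diagnose(monitoring):
    """Return recovery suggestions for every rule matching the snapshot."""
    return [rule["suggestion"] for rule in RULES if rule["condition"](monitoring)]

hints = diagnose({"rate_hz": 0, "fed_backpressure": True, "hlt_cpu_pct": 40})
```

Evaluating the rules on every monitoring update is what lets the system surface a suggestion instantly, instead of the shifter reasoning from raw monitoring plots under time pressure.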

  16. Data Acquisition System for Electron Energy Loss Coincident Spectrometers

    International Nuclear Information System (INIS)

    Zhang Chi; Yu Xiaoqi; Yang Tao

    2005-01-01

    A Data Acquisition System (DAQ) for electron energy loss coincident spectrometers (EELCS) has been developed. The system is composed of a Multiplex Time-to-Digital Converter (TDC) that measures the flight time of positive and negative ions and a one-dimensional position-sensitive detector that records the energy loss of scattered electrons. The experimental data are buffered in a first-in-first-out (FIFO) memory module, then transferred from the FIFO memory to a PC over the USB interface. The DAQ system can record the flight times of several ions in one collision and allows different data collection modes. The system has been demonstrated on the Electron Energy Loss Coincident Spectrometers at the Laboratory of Atomic and Molecular Physics, USTC. A detailed description of the whole system is given and experimental results are shown.

  17. Reviews Toy: Air swimmers Book: Their Arrows will Darken the Sun: The Evolution and Science of Ballistics Book: Physics Experiments for your Bag Book: Quantum Physics for Poets Equipment: SEP colour wheel kit Equipment: SEP colour mixing kit Software: USB DrDAQ App: iHandy Level Equipment: Photonics Explorer kit Web Watch

    Science.gov (United States)

    2012-01-01

    WE RECOMMEND Air swimmers Helium balloon swims like a fish Their Arrows will Darken the Sun: The Evolution and Science of Ballistics Ballistics book hits the spot Physics Experiments for your Bag Handy experiments for your lessons Quantum Physics for Poets Book shows the economic importance of physics SEP colour wheel kit Wheels investigate colour theory SEP colour mixing kit Cheap colour mixing kit uses red, green and blue LEDs iHandy Level iPhone app superbly measures angles Photonics Explorer kit Free optics kit given to schools WORTH A LOOK DrDAQ DrDAQ software gets an upgrade WEB WATCH Websites show range of physics

  18. The Trigger System of the CMS Experiment

    OpenAIRE

    Felcini, Marta

    2008-01-01

    We give an overview of the main features of the CMS trigger and data acquisition (DAQ) system. Then, we illustrate the strategies and trigger configurations (trigger tables) developed for the detector calibration and physics program of the CMS experiment, at start-up of LHC operations, as well as their possible evolution with increasing luminosity. Finally, we discuss the expected CPU time performance of the trigger algorithms and the CPU requirements for the event filter farm at start-up.

  19. A System for Exchanging Control and Status Messages in the NOvA Data Acquisition

    International Nuclear Information System (INIS)

    Biery, K.A.; Cooper, R.G.; Foulkes, S.C.; Guglielmo, G.M.; Piccoli, L.P.; Votava, M.E.V.; Fermilab

    2007-01-01

    In preparation for NOvA, a future neutrino experiment at Fermilab, we are developing a system for passing control and status messages in the data acquisition system. The DAQ system will consist of applications running on approximately 450 nodes. The message passing system will use a publish-subscribe model and will provide support for sending messages and receiving the associated replies. Additional features of the system include a layered architecture with custom APIs tailored to the needs of a DAQ system, the use of an open source messaging system for handling the reliable delivery of messages, the ability to send broadcasts to groups of applications, and APIs in Java, C++, and Python. Our choice for the open source system to deliver messages is EPICS. We will discuss the architecture of the system, our experience with EPICS, and preliminary test results
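The publish-subscribe pattern with replies described above can be illustrated with a toy in-process message bus; the real system delegates reliable, networked delivery to EPICS, and the topic names below are invented:

```python
from collections import defaultdict

class MessageBus:
    """Toy in-process publish-subscribe bus illustrating the pattern."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a handler to receive messages on a topic."""
        self._subs[topic].append(handler)

    def publish(self, topic, message):
        """Broadcast to every subscriber; collect their replies."""
        return [handler(message) for handler in self._subs[topic]]

# Two hypothetical DAQ nodes subscribing to a control topic
bus = MessageBus()
bus.subscribe("daq.control", lambda msg: f"node-1 ack: {msg}")
bus.subscribe("daq.control", lambda msg: f"node-2 ack: {msg}")
replies = bus.publish("daq.control", "begin-run 42")
```

The broadcast-to-a-group and collect-replies shape is what makes this model a good fit for commanding ~450 DAQ nodes at once while still confirming that each one acted on the command.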

  20. A System for Exchanging Control and Status Messages in the NOvA Data Acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Biery, K.A.; Cooper, R.G.; Foulkes, S.C.; Guglielmo, G.M.; Piccoli, L.P.; Votava, M.E.V.; /Fermilab

    2007-04-01

    In preparation for NOvA, a future neutrino experiment at Fermilab, we are developing a system for passing control and status messages in the data acquisition system. The DAQ system will consist of applications running on approximately 450 nodes. The message passing system will use a publish-subscribe model and will provide support for sending messages and receiving the associated replies. Additional features of the system include a layered architecture with custom APIs tailored to the needs of a DAQ system, the use of an open source messaging system for handling the reliable delivery of messages, the ability to send broadcasts to groups of applications, and APIs in Java, C++, and Python. Our choice for the open source system to deliver messages is EPICS. We will discuss the architecture of the system, our experience with EPICS, and preliminary test results.

  1. Core component integration tests for the back-end software sub-system in the ATLAS data acquisition and event filter prototype -1 project

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.; Niculescu, M.; Radu, A.

    2000-01-01

    The ATLAS data acquisition (DAQ) and Event Filter (EF) prototype -1 project was intended to produce a prototype system for evaluating candidate technologies and architectures for the final ATLAS DAQ system on the LHC accelerator at CERN. Within the prototype project, the back-end sub-system encompasses the software for configuring, controlling and monitoring the DAQ. The back-end sub-system includes core components and detector integration components. The core components provide the basic functionality and had priority in the development time-scale, in order to have a baseline sub-system that can be used for integration with the data-flow sub-system and event filter. The following components are considered to be the core of the back-end sub-system: - Configuration databases describe a large number of parameters of the DAQ system architecture, hardware and software components, running modes and status; - Message reporting system (MRS) allows all software components to report messages to other components in the distributed environment; - Information service (IS) allows information exchange between software components; - Process manager (PMG) performs basic job control of software components (start, stop, monitoring of status); - Run control (RC) controls the data-taking activities by coordinating the operations of the DAQ sub-systems, back-end software and external systems. Performance and scalability tests have been made for individual components. The back-end sub-system integration tests bring together all the core components and several trigger/DAQ/detector integration components to simulate the control and configuration of data-taking sessions. A test plan was provided for the back-end integration tests. The tests have been done using a shell script that goes through the following phases: - starting the back-end server processes to initialize communication services and PMG; - launching configuration specific processes via DAQ supervisor as
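The run control (RC) component coordinates data-taking through a sequence of state transitions like those exercised by the integration test script. A minimal sketch with a hypothetical transition table; the actual back-end states and commands differ in detail:

```python
# Hypothetical run-control transition table: (state, command) -> next state
TRANSITIONS = {
    ("initial", "boot"): "configured",
    ("configured", "start"): "running",
    ("running", "stop"): "configured",
    ("configured", "shutdown"): "initial",
}

def run_sequence(commands, state="initial"):
    """Drive the run-control state machine, rejecting illegal transitions."""
    for cmd in commands:
        key = (state, cmd)
        if key not in TRANSITIONS:
            raise ValueError(f"illegal transition {cmd!r} in state {state!r}")
        state = TRANSITIONS[key]
    return state

# A full data-taking session returns the system to its initial state
final = run_sequence(["boot", "start", "stop", "shutdown"])
```

Encoding the allowed transitions in a table is what lets the run controller refuse out-of-order commands (e.g. "start" before "boot") instead of leaving the DAQ in an undefined state.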

  2. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    Science.gov (United States)

    Claus, R.; ATLAS Collaboration

    2016-07-01

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.

  3. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    International Nuclear Information System (INIS)

    Claus, R.

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013–2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.

  4. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    Science.gov (United States)

    Bartoldus, R.; Claus, R.; Garelli, N.; Herbst, R. T.; Huffer, M.; Iakovidis, G.; Iordanidou, K.; Kwan, K.; Kocian, M.; Lankford, A. J.; Moschovakos, P.; Nelson, A.; Ntekas, K.; Ruckman, L.; Russell, J.; Schernau, M.; Schlenker, S.; Su, D.; Valderanis, C.; Wittgen, M.; Yildiz, S. C.

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiberoptic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2.

  5. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    International Nuclear Information System (INIS)

    Bartoldus, R.; Claus, R.; Garelli, N.; Herbst, R.T.; Huffer, M.; Kocian, M.; Ruckman, L.; Russell, J.; Su, D.; Wittgen, M.; Iakovidis, G.; Iordanidou, K.; Moschovakos, P.; Ntekas, K.; Kwan, K.; Lankford, A.J.; Nelson, A.; Schernau, M.; Schlenker, S.; Valderanis, C.

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiberoptic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2

  6. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    Energy Technology Data Exchange (ETDEWEB)

    Claus, R., E-mail: claus@slac.stanford.edu

    2016-07-11

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013–2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.

  7. Integrated graphical user interface for the back-end software sub-system

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.

    2001-01-01

    The ATLAS data acquisition and Event Filter prototype '-1' project was intended to produce a prototype system for evaluating candidate technologies and architectures for the final ATLAS DAQ system on the LHC accelerator at CERN. Within the prototype project, the back-end sub-system encompasses the software for configuring, controlling and monitoring the data acquisition (DAQ). The back-end sub-system includes core components and detector integration components. One of the detector integration components is the Integrated Graphical User Interface (IGUI), which is intended to give a view of the status of the DAQ system and its sub-systems (Dataflow, Event Filter and Back-end) and to allow the user (a general user, such as a shift operator at a test beam, or an expert who needs to control and debug the DAQ system) to control its operation. The IGUI is intended to be both a Status Display and a Control Interface, so there are three groups of functional requirements: display requirements (the information to be displayed); control requirements (the actions the IGUI shall perform on the DAQ components); and general requirements, applying to the general functionality of the IGUI. The constraint requirements include requirements related to access control (shift operator or expert user). The quality requirements are related to portability across different platforms. The IGUI has to interact with many components in a distributed environment. The following design guidelines have been considered in order to fulfil the requirements: use a modular design with easy integration of different sub-systems; use the Java language for portability and powerful graphical features; use CORBA interfaces for communication with other components. The actual implementation of the Back-end software components uses Inter-Language Unification (ILU) for inter-process communication.
Different methods of access of Java applications to ILU C++ servers have been evaluated (native methods, ILU Java support

  8. New Communication Network Protocol for a Data Acquisition System

    Science.gov (United States)

    Uchida, T.; Fujii, H.; Nagasaka, Y.; Tanaka, M.

    2006-02-01

    An event builder based on communication networks has been used in high-energy physics experiments, and various networks have been adopted, for example, IEEE 802.3 (Ethernet), asynchronous transfer mode (ATM), and so on. In particular, Ethernet is widely used because its infrastructure is very cost effective. Many systems adopt standard protocols that are designed for a general network. However, in the case of an event builder, the communication pattern between stations differs from that in a general network. This unique communication pattern causes congestion and thus makes it difficult to design the network quantitatively. To solve this problem, we have developed a simple network protocol for a data acquisition (DAQ) system. The protocol is designed to keep the sequence of senders fixed so that no congestion occurs. We implemented the protocol on a small hardware component [a field programmable gate array (FPGA)] and measured its performance, so that it will be ready for a generic DAQ system.
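The key idea of the protocol, serializing the senders so that fragments never collide at the event builder's input link, can be sketched in software even though the real implementation lives in FPGA logic. The function below emits a fixed round-robin transmission order, which is an idealization of the hardware sequencing:

```python
def schedule_senders(n_senders, n_events):
    """Round-robin transmission order for an event builder.

    For each event, every front-end sends its fragment in a fixed
    sequence, so at most one station transmits toward the event
    builder at a time and no incast congestion can occur.
    """
    order = []
    for event in range(n_events):
        for sender in range(n_senders):
            order.append((event, sender))
    return order

# Three front-ends, two events: fragments arrive strictly serialized
order = schedule_senders(n_senders=3, n_events=2)
```

Because the order is deterministic, the aggregate load on the receiving link is known by construction, which is what makes the quantitative network design tractable where a general-purpose protocol would not be.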

  9. A PC-Linux-based data acquisition system for the STAR TOFp detector

    International Nuclear Information System (INIS)

    Liu Zhixu; Liu Feng; Zhang Bingyun

    2003-01-01

    Commodity hardware running the open-source operating system Linux is playing various important roles in the field of high energy physics. This paper describes the PC-Linux-based data acquisition system of the STAR TOFp detector. It is based on a conventional solution, with front-end electronics made of NIM and CAMAC modules controlled by a PC running Linux. The system has been commissioned into the STAR DAQ system and worked successfully in the second year of STAR physics runs

  10. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    AUTHOR|(SzGeCERN)696050; Garelli, N.; Herbst, R.T.; Huffer, M.; Iakovidis, G.; Iordanidou, K.; Kwan, K.; Kocian, M.; Lankford, A.J.; Moschovakos, P.; Nelson, A.; Ntekas, K.; Ruckman, L.; Russell, J.; Schernau, M.; Schlenker, S.; Su, D.; Valderanis, C.; Wittgen, M.; Bartoldus, R.

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambe...

  11. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    ATLAS CSC Collaboration; The ATLAS collaboration

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chamber...

  12. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    AUTHOR|(SzGeCERN)664042

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf thr...

  13. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    Claus, Richard; The ATLAS collaboration

    2015-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf thro...

  14. Designing a Signal Conditioning System with Software Calibration for Resistor-feedback Patch Clamp Amplifier.

    Science.gov (United States)

    Hu, Gang; Zhu, Quanhui; Qu, Anlian

    2005-01-01

    In this paper, a programmable signal conditioning system based on software calibration for a resistor-feedback patch clamp amplifier (PCA) is described. The system is mainly composed of frequency correction, programmable gain and filter stages whose parameters are configured automatically by software to minimize errors. A lab-designed data acquisition (DAQ) system is used to implement data collection and communication with a PC. Laboratory test results show good agreement with the design specifications.

  15. A firmware implementation of a Quad HOLA S-LINK to PCI Express interface for use in the ATLAS Trigger DAQ system

    CERN Document Server

    Slenders, Daniel

    2014-01-01

    The firmware for a PCI Express interface card with four on-board high-speed optical S-LINKS (FILAREXPRESS) has been developed. This was done for an Altera Stratix II GX FPGA. Furthermore, detection of the available channels through a pull-up resistor and a readout of the on-board temperature sensor were implemented.

  16. Operational experience with the CMS Data Acquisition System

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The data-acquisition (DAQ) system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first-level hardware trigger. Assembled events are made available to the high-level trigger (HLT), which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregate throughput of 100 GB/s originating from approximately 500 sources and 10^8 electronic channels. An overview of the architecture and design of the hardware and software of the DAQ system is given. We report on the performance and operational experience of the DAQ and its Run Control System during the first two years of LHC collider running, in both proton-proton and Pb-Pb collisions. We present an analysis of the current performance, its limitations, and the most common failure modes, and discuss the ongoing evolution of the HLT capability needed to match the luminosity ramp-up of the LHC.
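
    The quoted figures imply an average event size of about 1 MB and roughly 2 kB per source fragment; a quick back-of-envelope check (illustrative arithmetic only):

    ```python
    # Back-of-envelope check of the quoted CMS DAQ figures.
    rate_hz = 100e3          # Level-1 accept rate: 100 kHz
    throughput = 100e9       # aggregate throughput: 100 GB/s
    sources = 500            # front-end readout sources

    event_size = throughput / rate_hz      # bytes per assembled event
    fragment_size = event_size / sources   # average bytes per source fragment

    print(event_size)     # 1000000.0 bytes, i.e. ~1 MB per event
    print(fragment_size)  # 2000.0 bytes, i.e. ~2 kB per fragment
    ```
    
    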

  17. Position-controlled data acquisition embedded system for magnetic NDE of bridge stay cables.

    Science.gov (United States)

    Maldonado-Lopez, Rocio; Christen, Rouven

    2011-01-01

    This work presents a custom-tailored sensing and data acquisition embedded system, designed to be integrated in a new magnetic NDE inspection device under development at Empa, a device intended for routine testing of large diameter bridge stay cables. The data acquisition (DAQ) system fulfills the speed and resolution requirements of the application and is able to continuously capture and store up to 2 GB of data at a sampling rate of 27 kS/s, with 12-bit resolution. This paper describes the DAQ system in detail, including both hardware and software implementation, as well as the key design challenges and the techniques employed to meet the specifications. Experimental results showing the performance of the system are also presented.
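
    Assuming a single channel with each 12-bit sample stored in a 16-bit word (an assumption, since the abstract does not state the packing), the quoted limits translate into roughly 11 hours of continuous capture:

    ```python
    # Rough capture-time estimate for the stated DAQ limits.
    # Assumptions (not from the paper): one channel, each 12-bit sample
    # padded to a 16-bit word on disk.
    sample_rate = 27_000          # samples per second (27 kS/s)
    bytes_per_sample = 2          # 12-bit sample stored in 16 bits
    buffer_bytes = 2 * 1024**3    # 2 GB of storage

    seconds = buffer_bytes / (sample_rate * bytes_per_sample)
    hours = seconds / 3600
    print(round(hours, 1))  # 11.0 -> about 11 hours of single-channel capture
    ```
    
    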

  18. The Data Acquisition and Calibration System for the ATLAS Semiconductor Tracker

    CERN Document Server

    Abdesselam, A; Barr, A J; Bell, P; Bernabeu, J; Butterworth, J M; Carter, J R; Carter, A A; Charles, E; Clark, A; Colijn, A P; Costa, M J; Dalmau, J M; Demirkoz, B; Dervan, P J; Donega, M; D'Onifrio, M; Escobar, C; Fasching, D; Ferguson, D P S; Ferrari, P; Ferrère, D; Fuster, J; Gallop, B; García, C; González, S; González-Sevilla, S; Goodrick, M J; Gorisek, A; Greenall, A; Grillo, A A; Hessey, N P; Hill, J C; Jackson, J N; Jared, R C; Johannson, P D C; de Jong, P; Joseph, J; Lacasta, C; Lane, J B; Lester, C G; Limper, M; Lindsay, S W; McKay, R L; Magrath, C A; Mangin-Brinet, M; Martí i García, S; Mellado, B; Meyer, W T; Mikulec, B; Minano, M; Mitsou, V A; Moorhead, G; Morrissey, M; Paganis, E; Palmer, M J; Parker, M A; Pernegger, H; Phillips, A; Phillips, P W; Postranecky, M; Robichaud-Véronneau, A; Robinson, D; Roe, S; Sandaker, H; Sciacca, F; Sfyrla, A; Stanecka, E; Stapnes, S; Stradling, A; Tyndel, M; Tricoli, A; Vickey, T; Vossebeld, J H; Warren, M R M; Weidberg, A R; Wells, P S; Wu, S L

    2008-01-01

    The SemiConductor Tracker (SCT) data acquisition (DAQ) system will calibrate, configure, and control the approximately six million front-end channels of the ATLAS silicon strip detector. It will provide a synchronized bunch-crossing clock to the front-end modules, communicate first-level triggers to the front-end chips, and transfer information about hit strips to the ATLAS high-level trigger system. The system has been used extensively for calibration and quality assurance during SCT barrel and endcap assembly and for performance confirmation tests after transport of the barrels and endcaps to CERN. Operating in data-taking mode, the DAQ has recorded nearly twenty million synchronously-triggered events during commissioning tests including almost a million cosmic ray triggered events. In this paper we describe the components of the data acquisition system, discuss its operation in calibration and data-taking modes and present some detector performance results from these tests.

  19. The data acquisition and calibration system for the ATLAS Semiconductor Tracker

    International Nuclear Information System (INIS)

    Abdesselam, A; Barr, A J; Demirkoez, B; Barber, T; Carter, J R; Bell, P; Bernabeu, J; Costa, M J; Escobar, C; Butterworth, J M; Carter, A A; Dalmau, J M; Charles, E; Fasching, D; Ferguson, D P S; Clark, A; Donega, M; D'Onifrio, M; Colijn, A-P; Dervan, P J

    2008-01-01

    The SemiConductor Tracker (SCT) data acquisition (DAQ) system will calibrate, configure, and control the approximately six million front-end channels of the ATLAS silicon strip detector. It will provide a synchronized bunch-crossing clock to the front-end modules, communicate first-level triggers to the front-end chips, and transfer information about hit strips to the ATLAS high-level trigger system. The system has been used extensively for calibration and quality assurance during SCT barrel and endcap assembly and for performance confirmation tests after transport of the barrels and endcaps to CERN. Operating in data-taking mode, the DAQ has recorded nearly twenty million synchronously-triggered events during commissioning tests including almost a million cosmic ray triggered events. In this paper we describe the components of the data acquisition system, discuss its operation in calibration and data-taking modes and present some detector performance results from these tests

  20. The Data Acquisition System for a Kinetic Inductance Detector

    International Nuclear Information System (INIS)

    Branchini, P; Budano, A; Capasso, L; Marchetti, D

    2015-01-01

    The Data Acquisition System (DAQ) and the Front-End electronics for an array of Kinetic Inductance Detectors (KIDs) are described. KIDs are superconductive detectors in which electrons are organized in Cooper pairs. Any incident radiation can break a pair, generating a couple of quasi-particles that increase the inductance of the detector. The DAQ system we developed is a hardware/software co-design, based on state machines and on a microprocessor embedded into an FPGA. A commercial DAC/ADC board is used to interface the FPGA to the array of KIDs. The DAQ system generates a stimulus signal suitable for an array of up to 128 KIDs. This signal is up-mixed with a 3 GHz carrier wave, which then excites the KIDs array. The read-out signal from the detector is down-mixed with the 3 GHz sine wave and the recovered stimulus is read back by the ADC device. The microprocessor stores the read-out data via a PCI Express bus (PCIe) onto an external disk. It also computes the Fast Fourier Transform of the acquired read-out signal: this makes it possible to determine which KID interacted and the energy of the impinging radiation. Simulations and tests have been performed successfully and experimental results are presented. (paper)
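
    The FFT-based identification step can be sketched in a few lines: in a frequency-multiplexed readout each KID is assigned a probe tone, and the tone whose amplitude drops flags the detector that absorbed radiation. The sample rate, tone frequencies and amplitudes below are invented for illustration:

    ```python
    import numpy as np

    # Sketch of frequency-multiplexed KID readout analysis (illustrative
    # parameters, not the paper's firmware): each detector carries one
    # probe tone; the FFT of the read-back waveform shows which tone was
    # attenuated, i.e. which KID absorbed radiation.
    fs = 1_000_000                      # sample rate, assumed
    n = 4096
    t = np.arange(n) / fs
    tones = [50_000, 100_000, 150_000]  # one probe tone per KID, assumed
    amps = [1.0, 0.4, 1.0]              # KID #1 attenuated by an event

    signal = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, tones))
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(n, 1 / fs)

    # Amplitude at each probe tone: the weakest bin flags the hit detector
    levels = [spectrum[np.argmin(np.abs(freqs - f))] for f in tones]
    hit = min(range(len(tones)), key=lambda i: levels[i])
    print(hit)  # 1 -> the attenuated KID
    ```
    
    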

  1. Tests of the data acquisition system and detector control system for the muon chambers of the CMS experiment at the LHC

    CERN Document Server

    Sowa, Michael Christian

    The Phys. Inst. III A of RWTH Aachen University is involved in the development, production and testing of the Drift Tube (DT) muon chambers for the barrel muon system of the CMS detector at the LHC at CERN (Geneva). The thesis describes test procedures that were developed and performed for the chamber local Data Acquisition (DAQ) system, as well as for parts of the Detector Control System (DCS). The test results were analyzed and discussed. Two main kinds of DAQ tests were done. On the one hand, to compare two different DAQ systems, the chamber signals were split and read out by both systems. This method made it possible to validate them by demonstrating that there were no relevant differences in the measured drift times generated by the same muon event in the same chamber cells. On the other hand, after the systems were validated, the quality of the data was checked. For this purpose extensive noise studies were performed. The noise dependence on various parameters (threshold, HV) was investigated quantitativel...

  2. Single event monitoring system based on Java 3D and XML data binding

    International Nuclear Information System (INIS)

    Wang Liang; Chinese Academy of Sciences, Beijing; Zhu Kejun; Zhao Jingwei

    2007-01-01

    Online single event monitoring is important for the BESIII DAQ system. Java 3D is an extension of the Java language for 3D technology, and XML data binding handles XML documents more efficiently than SAX or DOM. This paper mainly introduces the implementation of the BESIII single event monitoring system with Java 3D and XML data binding, and the interface to the track fitting software using JNI technology. (authors)

  3. Development of Data Acquisition System for nuclear thermal hydraulic out-of-pile facility using the graphical programming methods

    Energy Technology Data Exchange (ETDEWEB)

    Bouaichaoui, Youcef; Berrahal, Abderezak; Halbaoui, Khaled [Birine Nuclear Research Center/CRNB/COMENA/ALGERIA, BO 180, Ain Oussera, 17200, Djelfa (Algeria)

    2015-07-01

    This paper describes the design of a data acquisition system (DAQ) that is connected to a PC, and the development of a feedback control system that maintains the coolant temperature of the process at a desired set point using a digital controller system based on a graphical programming language. The paper provides details about the data acquisition unit, shows the implementation of the controller, and presents test results. (authors)

  4. A simple timestamping data acquisition system for ToF-ERDA

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, Mikko, E-mail: mikrossi@jyu.fi; Rahkila, Panu; Kettunen, Heikki; Laitinen, Mikko

    2015-03-15

    A new data acquisition system, ToF-DAQ, has been developed for a ToF-ERDA telescope and other ToF-E and ToF-ToF measurement systems. ToF-DAQ combines an analogue electronics front-end with asynchronous timestamped data acquisition by means of an FPGA device. Coincidences are sought solely in software, based on the timestamps. Timestamping offers more options for data analysis, as coincidence events can also be built in offline analysis. The system utilizes a National Instruments R-series FPGA device and a Windows PC as a host computer. Both the FPGA code and the host software were developed using the National Instruments LabVIEW graphical programming environment. Up to eight NIM ADCs can be handled by a single FPGA. The host computer and the FPGA can process total continuous count rates of over 750,000 counts/s with a timestamping resolution of 8.33 ns.
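
    Software coincidence building from timestamps can be sketched as follows; the matching window and event lists are invented for illustration, not taken from ToF-DAQ:

    ```python
    # Sketch of offline coincidence building from timestamped events:
    # pair a "start" timestamp with the first "stop" timestamp that falls
    # inside a coincidence window. Both lists are assumed time-ordered.

    def build_coincidences(starts, stops, window_ns):
        """Match each start to the first stop within window_ns."""
        pairs, j = [], 0
        for t_start in starts:
            while j < len(stops) and stops[j] < t_start:
                j += 1                  # this stop is older than the start
            if j < len(stops) and stops[j] - t_start <= window_ns:
                pairs.append((t_start, stops[j]))
                j += 1
        return pairs

    # Illustrative timestamps in ns (the abstract quotes 8.33 ns resolution)
    starts = [100, 500, 900]
    stops = [108, 700, 905]
    print(build_coincidences(starts, stops, window_ns=25))
    # [(100, 108), (900, 905)]; the 700 ns stop has no start in range
    ```
    
    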

  5. Workshop on data acquisition and trigger system simulations for high energy physics

    International Nuclear Information System (INIS)

    1992-01-01

    This report discusses the following topics: DAQSIM: A data acquisition system simulation tool; Front end and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- an Overview; DAGAR -- A synthesis system; Proposed Silicon Compiler for Physics Applications; Timed-LOTOS in a PROLOG Environment: an Algebraic language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies

  6. Workshop on data acquisition and trigger system simulations for high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-12-31

    This report discusses the following topics: DAQSIM: A data acquisition system simulation tool; Front end and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- an Overview; DAGAR -- A synthesis system; Proposed Silicon Compiler for Physics Applications; Timed-LOTOS in a PROLOG Environment: an Algebraic language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies.

  7. A modular and extensible data acquisition and control system for testing superconducting magnets

    International Nuclear Information System (INIS)

    Darryl F. Orris and Ruben H. Carcagno

    2001-01-01

    The Magnet Test Facility at Fermilab tests a variety of full-scale and model superconducting magnets for both R&D and production. As the design characteristics and test requirements of these magnets vary widely, the magnet test stand must accommodate a wide range of Data Acquisition (DAQ) and Control requirements. Such a system must provide several functions, including quench detection, quench protection, power supply control, quench characterization, and slow DAQ of temperature, mechanical strain gauges, liquid helium level, etc. The system must also provide cryogenic valve control, process instrumentation monitoring, and process interlock logic associated with the test stand. A DAQ and Control system architecture that provides the functionality described above has been designed, fabricated, and put into operation. This system utilizes a modular approach that provides both extensibility and flexibility. As a result, the complexity of the hardware is minimized while remaining optimized for future expansion. The architecture of this new system is presented along with a description of the different technologies applied to each module. Commissioning and operating experience as well as plans for future expansion are discussed

  8. Data acquisition system for steady state experiments at multi-sites

    International Nuclear Information System (INIS)

    Nakanishi, H.; Emoto, M.; Nagayama, Y.

    2010-11-01

    A high-performance data acquisition system (LABCOM system) has been developed for steady-state fusion experiments in the Large Helical Device (LHD). The most important characteristics of this system are its 110 MB/s high-speed real-time data acquisition capability and the scalability of its performance through the use of an unlimited number of data acquisition (DAQ) units. It can also acquire experimental data from multiple remote sites through the 1 Gbps fusion-dedicated virtual private network (SNET) in Japan. In LHD steady-state experiments, the DAQ cluster has established a world record for acquired data volume of 90 GB per shot, which almost reaches the ITER data estimate. Since all the DAQ, storage, and data clients of the LABCOM system are distributed on the local area network (LAN), remote experimental data can also be acquired simply by extending the LAN to the wide-area SNET. The throughput degradation in long-distance TCP/IP data transfer has been mitigated by using an optimized congestion control and packet pacing method. Japan-France and Japan-US network bandwidth tests have revealed that this method actually utilizes 90% of the ideal throughput in both cases. Toward the fusion goal, a common data access platform is indispensable so that detailed physics data can be easily compared between multiple large and small experiments. The demonstrated bilateral collaboration scheme will be analogous to that of ITER and the supporting machines. (author)
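
    Packet pacing spreads transmissions evenly in time instead of sending line-rate bursts, which keeps long-distance paths from overflowing intermediate buffers. A minimal sketch of the idea (not the LABCOM implementation; the rate and packet size are invented):

    ```python
    import time

    # Minimal packet-pacing sketch: to use a fixed fraction of a link
    # without bursts, insert a constant inter-packet gap so that the
    # instantaneous send rate matches the target rate.

    def paced_send(packets, packet_bytes, target_bps, send):
        gap = packet_bytes * 8 / target_bps   # seconds between packet starts
        next_slot = time.monotonic()
        for p in packets:
            now = time.monotonic()
            if now < next_slot:
                time.sleep(next_slot - now)   # wait for the next pacing slot
            send(p)
            next_slot += gap

    sent = []
    paced_send([b"a" * 1500] * 5, packet_bytes=1500, target_bps=12_000_000,
               send=sent.append)
    print(len(sent))  # 5 packets sent, spaced ~1 ms apart
    ```
    
    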

  9. Labview Based ECG Patient Monitoring System for Cardiovascular Patient Using SMTP Technology

    OpenAIRE

    Singh, Om Prakash; Mekonnen, Dawit; Malarvili, M. B.

    2015-01-01

    This paper presents the development of a LabVIEW-based ECG patient monitoring system for cardiovascular patients using Simple Mail Transfer Protocol (SMTP) technology. The designed device is divided into three parts. The first part is the ECG amplifier circuit, built using an instrumentation amplifier (AD620) followed by a signal conditioning circuit with an operational amplifier (LM741). Secondly, a DAQ card is used to convert the analog signal into digital form for further processing. Furthermore, the data has ...

  10. Prototyping a 10Gigabit-Ethernet Event-Builder for a Cherenkov Telescope Array

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    We present the prototyping of a 10 Gigabit-Ethernet based UDP data acquisition (DAQ) system that has been conceived in the context of the Array and Control group of CTA (Cherenkov Telescope Array). The CTA consortium plans to build the next-generation ground-based gamma-ray instrument, with approximately 100 telescopes of at least three different sizes installed on two sites. The raw camera dataflow amounts to 1.2 GByte/s per camera. We have conceived and built a prototype of a front-end event-builder DAQ able to receive and process such a data rate, reducing the data to a level more sustainable for the central data logging of the site. We took into account the characteristics and constraints of several camera electronics projects in CTA, thus keeping a generic approach to all front-end types. The large number of telescopes and the remoteness of the array sites imply that any front-end element must be robust and self-healing to a large extent. The main difficulty is to combine very high performances with a...
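
    The receive side of such a UDP event builder can be sketched as a loopback demo; the 4-byte event-id header, ports and payloads below are invented for illustration, not the CTA format:

    ```python
    import socket

    # Sketch of the UDP receive side of an Ethernet-based DAQ (loopback
    # demo with an invented packet layout): camera fragments arrive as
    # datagrams and are grouped by an event id carried in the first
    # 4 bytes of each packet.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))                  # let the OS pick a free port
    port = rx.getsockname()[1]

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for event_id, payload in [(7, b"frag-A"), (7, b"frag-B"), (8, b"frag-C")]:
        tx.sendto(event_id.to_bytes(4, "big") + payload, ("127.0.0.1", port))

    events = {}
    for _ in range(3):
        data, _ = rx.recvfrom(2048)
        eid = int.from_bytes(data[:4], "big")  # event-id header (assumed)
        events.setdefault(eid, []).append(data[4:])

    print(sorted(events))   # [7, 8]
    print(len(events[7]))   # 2 fragments assembled for event 7
    rx.close(); tx.close()
    ```
    
    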

  11. The IFR Online Detector Control system at the BaBar Experiment

    International Nuclear Information System (INIS)

    Paolucci, Pierluigi

    1999-01-01

    The Instrumented Flux Return (IFR)[1] is one of the five subdetectors of the BaBar[2] experiment at the PEP-II accelerator at SLAC. The IFR consists of 774 Resistive Plate Chamber (RPC) detectors, covering an area of about 2,000 m² and equipped with 3,000 Front-end Electronic Cards (FEC) reading about 50,000 channels (readout strips). The first aim of a B-factory experiment is to run continuously without any interruption, so the Detector Control System plays a very important role in reducing the dead-time due to hardware problems. The I.N.F.N. group of Naples has designed and built the IFR Online Detector Control System (IODC)[3] in order to control and monitor the operation of this large number of detectors and of all the IFR subsystems: High Voltage, Low Voltage, Gas system, Trigger and DAQ crates. The IODC consists of 8 custom DAQ stations placed around the detector and one central DAQ station, based on VME technology, placed in the electronics house. The IODC uses VxWorks and EPICS to implement a slow-control data flow of about 2500 hardware channels and to develop part of the readout module, consisting of about 3500 records. EPICS is interfaced with the BaBar Run Control through the Component Proxy and with the BaBar database (Objectivity) through the Archiver and KeyLookup processes

  12. Network based on statistical multiplexing for event selection and event builder systems in high energy physics experiments; Reseau a multiplexage statistique pour les systemes de selection et de reconstruction d'evenements dans les experiences de physique des hautes energies

    Energy Technology Data Exchange (ETDEWEB)

    Calvet, D

    2000-03-01

    Systems for on-line event selection in future high energy physics experiments will use advanced distributed computing techniques and will need high speed networks. After a brief description of projects at the Large Hadron Collider, the architectures initially proposed for the Trigger and Data AcQuisition (T/DAQ) systems of the ATLAS and CMS experiments are presented and analyzed. A new architecture for the ATLAS T/DAQ is introduced. Candidate network technologies for this system are described. This thesis focuses on ATM. A variety of network structures and topologies suited to partial and full event building are investigated. The need for efficient networking is shown. Optimization techniques for high speed messaging and their implementation on ATM components are described. Small scale demonstrator systems consisting of up to 48 computers (∼1:20 of the final level 2 trigger) connected via ATM are described. Performance results are presented. Extrapolation of measurements and evaluation of needs lead to a proposal of implementation for the main network of the ATLAS T/DAQ system. (author)

  13. Network based on statistical multiplexing for event selection and event builder systems in high energy physics experiments

    International Nuclear Information System (INIS)

    Calvet, D.

    2000-03-01

    Systems for on-line event selection in future high energy physics experiments will use advanced distributed computing techniques and will need high speed networks. After a brief description of projects at the Large Hadron Collider, the architectures initially proposed for the Trigger and Data AcQuisition (T/DAQ) systems of the ATLAS and CMS experiments are presented and analyzed. A new architecture for the ATLAS T/DAQ is introduced. Candidate network technologies for this system are described. This thesis focuses on ATM. A variety of network structures and topologies suited to partial and full event building are investigated. The need for efficient networking is shown. Optimization techniques for high speed messaging and their implementation on ATM components are described. Small scale demonstrator systems consisting of up to 48 computers (∼1:20 of the final level 2 trigger) connected via ATM are described. Performance results are presented. Extrapolation of measurements and evaluation of needs lead to a proposal of implementation for the main network of the ATLAS T/DAQ system. (author)

  14. Fast Data Acquisition system based on NI-myRIO board with GPS time stamping capabilities for atmospheric electricity research

    International Nuclear Information System (INIS)

    Pokhsraryan, D.

    2016-01-01

    In the investigation of fast physical processes, such as the propagation of a lightning leader and the detection of the corresponding radio emission waveforms, it is crucial to synchronize the corresponding signals in order to be able to create a model of lightning initiation. Therefore, the DAQ system should be equipped with a GPS synchronization capability. In the presented report, we describe a DAQ system based on an NI-myRIO board that provides detection of particle fluxes, near-surface electric field disturbances and waveforms of radio signals from atmospheric discharges, all synchronized with an accuracy of tens of nanoseconds. The results of the first measurements, made at the Aragats high-altitude station of the Yerevan Physics Institute in summer-autumn 2015, are presented and discussed. (author)

  15. Advanced Operating System Technologies

    Science.gov (United States)

    Cittolin, Sergio; Riccardi, Fabio; Vascotto, Sandro

    In this paper we describe an R&D effort to define an OS architecture suitable for the requirements of the Data Acquisition and Control of an LHC experiment. Large distributed computing systems are foreseen to be the core part of the DAQ and Control system of the future LHC experiments. Networks of thousands of processors, handling dataflows of several gigabytes per second with very strict timing constraints (microseconds), will become a common experience in the following years. Problems like distributed scheduling, real-time communication protocols, fault tolerance, and distributed monitoring and debugging will have to be faced. A solid software infrastructure will be required to manage this very complicated environment; at this moment neither does CERN have the necessary expertise to build it, nor does any similar commercial implementation exist. Fortunately these problems are not unique to particle and high energy physics experiments, and current research in the distributed systems field, especially in the distributed operating systems area, is trying to address many of the above-mentioned issues. The world that we are going to face in the next ten years will be quite different and surely much more interconnected than the one we see now. Very ambitious projects exist, planning to link towns, nations and the world in a single "Data Highway". Teleconferencing, Video on Demand and Distributed Multimedia Applications are just a few examples of the very demanding tasks to which the computer industry is committing itself. These projects are triggering a great research effort in the distributed, real-time, micro-kernel-based operating systems field and in the software engineering areas.
The purpose of our group is to collect the outcome of these different research efforts, and to establish a working environment where the different ideas and techniques can be tested, evaluated and possibly extended, to address the requirements of a DAQ and Control System suitable for LHC

  16. Data acquisition system and link and data aggregator for the CALICE analogue hadron calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Caudron, Julien; Adam, Lennart; Bauss, Bruno; Buescher, Volker; Chau, Phi; Degele, Reinhold; Geib, Karl-Heinrich; Krause, Sascha; Liu, Yong; Masetti, Lucia; Schaefer, Ulrich; Spreckels, Rouven; Tapprogge, Stefan; Wanke, Rainer [Johannes-Gutenberg Universitaet, Mainz (Germany); Collaboration: CALICE-D-Collaboration

    2015-07-01

    The Analogue Hadron Calorimeter (AHCAL) is one of several calorimeter designs developed by the CALICE collaboration for future linear colliders. It is a high-granularity sampling calorimeter with plastic scintillator tiles of 3 x 3 cm{sup 2}, adding up to ≈8'000'000 sensors. This large number of channels requires a powerful data acquisition (DAQ) system. In this DAQ system, the Link and Data Aggregator (LDA) module acts as an intermediate component that groups several layer units together, dispatching control signals and merging data. A first LDA design (mini-LDA), intended to be flexible but limited to a small number of layers, was successfully used during the end-of-year 2014 CERN Test Beam program. A second prototype (wing-LDA), compatible with a complete detector design, is operating during the Test Beam program of 2015. This talk will present the current status of the DAQ and the LDA, with recent results from Test Beam and future plans.
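The aggregation role described above, merging the data of several layer units into one time-ordered stream, can be illustrated with a toy merge of sorted hit lists. The (timestamp, layer, hit) layout is invented for the sketch and is not the AHCAL data format.

```python
import heapq

# Per-layer hit streams, each already ordered by timestamp.
layer0 = [(10, 0, "hitA"), (42, 0, "hitC")]
layer1 = [(15, 1, "hitB"), (50, 1, "hitD")]
layer2 = [(12, 2, "hitE")]

# heapq.merge keeps the combined stream sorted by timestamp without
# loading all inputs into memory first, which is the essence of what a
# data aggregator does for the downstream DAQ.
merged = list(heapq.merge(layer0, layer1, layer2))
print([t for t, _, _ in merged])  # -> [10, 12, 15, 42, 50]
```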

  17. A Distributed Data Acquisition System for the Sensor Network of the TAWARA_RTM Project

    Science.gov (United States)

    Fontana, Cristiano Lino; Donati, Massimiliano; Cester, Davide; Fanucci, Luca; Iovene, Alessandro; Swiderski, Lukasz; Moretto, Sandra; Moszynski, Marek; Olejnik, Anna; Ruiu, Alessio; Stevanato, Luca; Batsch, Tadeusz; Tintori, Carlo; Lunardon, Marcello

    This paper describes a distributed Data Acquisition System (DAQ) developed for the TAWARA_RTM project (TAp WAter RAdioactivity Real Time Monitor). The aim is to detect the presence of radioactive contaminants in drinking water in order to prevent deliberate or accidental threats. Employing a set of detectors, it is possible to detect alpha, beta and gamma radiation from emitters dissolved in water. The Sensor Network (SN) consists of several heterogeneous nodes controlled by a centralized server. The cyber-security of the SN is guaranteed in order to protect it from external intrusions and malicious acts. The nodes were installed at different locations along the water treatment process in the waterworks plant supplying the aqueduct of Warsaw, Poland. Embedded computers control the simpler nodes and are directly connected to the SN. Local PCs (LPCs) control the more complex nodes, which consist of signal digitizers acquiring data from several detectors. The DAQ in each LPC is split into several processes communicating through sockets in a local sub-network. Each process is dedicated to a very simple task (e.g. data acquisition, data analysis, hydraulics management) in order to obtain a flexible and fault-tolerant system. The main SN and the local DAQ networks are separated by data routers to ensure cyber-security.
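The task-per-process pattern described above, where each DAQ task runs independently and exchanges messages over local sockets so that one failing task does not stop the rest, can be sketched in miniature. The host, port, and JSON message format are illustrative, not from the TAWARA_RTM project; threads stand in for separate processes to keep the example self-contained.

```python
import json
import socket
import threading

HOST, PORT = "127.0.0.1", 50007

def analysis_server(results, ready):
    """Receive one reading from the acquisition task and analyse it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # signal that the server is up
        conn, _ = srv.accept()
        with conn:
            msg = json.loads(conn.recv(4096).decode())
            # Flag the sample if the count rate exceeds a threshold.
            results["alarm"] = msg["counts"] > 100

def acquisition_client(counts):
    """Send one 'measurement' to the analysis task over the local socket."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(json.dumps({"detector": "gamma-1", "counts": counts}).encode())

results, ready = {}, threading.Event()
t = threading.Thread(target=analysis_server, args=(results, ready))
t.start()
ready.wait()
acquisition_client(250)
t.join()
print(results["alarm"])   # a rate of 250 counts trips the alarm
```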

  18. Components for the data acquisition system of the ATLAS testbeams 1996

    International Nuclear Information System (INIS)

    Caprini, M; Niculescu, Michaela

    1997-01-01

    ATLAS is one of the experiments developed at CERN for the Large Hadron Collider. A data acquisition system (DAQ) was designed for the sub-detector testbeams. The Bucharest group is a member of the ATLAS DAQ collaboration and contributed to the development of several components of the testbeam DAQ: readout modules for standalone and combined test-beams; a readout module for the liquid argon detector; the run control graphical user interface; and the central data recording system. The readout module acquires data event by event from the detector electronics and is based on a Finite State Machine (FSM) incorporating a general scheme for the calibration procedure. The FSM allows detectors to take data either in standalone mode, with local control and recording, or in combined mode together with other sub-detectors, with very easy switching between the two configurations. The readout module for the liquid argon detector is written as a data flow element which takes raw data and creates a formatted event. At the initialization stage, the run and detector parameters are read from the Run Control Parameters database. The state changes are then driven by three interrupt signals (Start of Burst, Trigger, End of Burst) generated by hardware. In calibration mode, at each trigger the event is built (calibration data are taken outside the beam) and the conditions for the next calibration trigger are then prepared (DAQ values, delays, pulsers). The graphical user interface is designed for the control of the data acquisition system. The interface provides a global experiment panel for the activation of, and navigation in, all the command and display panels. The user can start, stop or change the state of the system, obtain the most important information about the state of the whole system, and activate other service programs in order to select parameters and databases and to display information about the evolution of the system.
The central data recording system relies on the client
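The burst-driven readout logic described in the abstract can be sketched as a small state machine advanced by the three hardware signals it names (Start of Burst, Trigger, End of Burst). This is a minimal illustration, not the ATLAS code; the state names and event layout are invented.

```python
class ReadoutFSM:
    """Toy finite state machine for burst-mode event readout."""

    def __init__(self):
        self.state = "IDLE"
        self.events = []

    def signal(self, name, data=None):
        if self.state == "IDLE" and name == "SOB":
            self.state = "IN_BURST"              # beam spill started
        elif self.state == "IN_BURST" and name == "TRIGGER":
            self.events.append({"raw": data})    # build one formatted event
        elif self.state == "IN_BURST" and name == "EOB":
            self.state = "IDLE"                  # spill over, ready for next
        else:
            raise RuntimeError(f"illegal signal {name} in state {self.state}")

fsm = ReadoutFSM()
for sig, data in [("SOB", None), ("TRIGGER", b"\x01"),
                  ("TRIGGER", b"\x02"), ("EOB", None)]:
    fsm.signal(sig, data)
print(len(fsm.events), fsm.state)  # -> 2 IDLE
```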

  19. Extending AAA operational model for profile-based access control in ethernet-based Neutral Access Networks

    NARCIS (Netherlands)

    Matias, J.; Jacob, E.; Demchenko, Y.; de Laat, C.; Gommans, L.; Macías López, E.M.; Bogliolo, A.; Perry, M.; Ran, M

    2010-01-01

    Neutral Access Networks (NAN) have appeared as a new model to overcome some restrictions and the lack of flexibility currently present in broadband access networks. NAN brings new business opportunities by opening this market to new stakeholders. Although the NAN model is accepted, there are

  20. Performance of waveform digitizers as a compact data acquisition system for the ISMRAN experiment

    International Nuclear Information System (INIS)

    Mitra, A.; Netrakanti, P.K.; Kashyap, V.K.S.; Behera, S.P.; Jha, V.; Mishra, D.K.; Pant, L.M.

    2016-01-01

    The Indian Scintillator Matrix for Reactor Anti-Neutrino (ISMRAN) detector is proposed at the Dhruva reactor, BARC, to measure anti-neutrinos (ν-bar) for the purposes of reactor monitoring and sterile neutrino searches. A one-ton detector, consisting of 100 plastic scintillator bars (10 cm x 10 cm x 100 cm), wrapped in gadolinium (Gd)-coated mylar foils and coupled to photomultiplier tubes (PMTs) at both ends, is planned for this purpose. One of the key components of such an experiment is the development of a dedicated and economical data acquisition system (DAQ) for the detector setup. FPGA-based waveform digitizers are suitable for this purpose, where data from a large number of detectors need to be read out simultaneously. This effectively reduces the burden of the intermediate conventional pulse-processing electronics between the detectors and the DAQ. We have procured 16-channel CAEN V1730 (14-bit, 500 MS/s) VME-based waveform digitizers for this purpose. A series of measurements has been carried out to evaluate the performance of the digitizers. We are also working on the related auxiliary software and the data format to be used extensively for the ISMRAN DAQ

  1. Controllable clock circuit design in PEM system

    International Nuclear Information System (INIS)

    Sun Yunhua; Wang Peihua; Hu Tingting; Feng Baotong; Shuai Lei; Huang Huan; Wei Shujun; Li Ke; Zhao Jingwei; Wei Long

    2011-01-01

    A high-precision synchronized clock circuit design is presented, which supplies a steady, reliable and interference-resistant clock signal for the data acquisition (DAQ) system of Positron Emission Mammography (PEM). The design is based on a single-chip microcomputer and a high-precision clock chip, and can provide multiple controllable clock signals. Interference between the clock signals is greatly reduced by differential transmission. Meanwhile, the adoption of CAN bus control in the clock circuit allows the clock signals to be transmitted or masked simultaneously when needed. (authors)

  3. Network based on statistical multiplexing for event selection and event builder systems in high energy physics experiments; Reseau a multiplexage statistique pour les systemes de selection et de reconstruction d'evenements dans les experiences de physique des hautes energies

    Energy Technology Data Exchange (ETDEWEB)

    Calvet, D

    2000-03-01

    Systems for on-line event selection in future high energy physics experiments will use advanced distributed computing techniques and will need high speed networks. After a brief description of projects at the Large Hadron Collider, the architectures initially proposed for the Trigger and Data AcQuisition (T/DAQ) systems of the ATLAS and CMS experiments are presented and analyzed. A new architecture for the ATLAS T/DAQ is introduced. Candidate network technologies for this system are described. This thesis focuses on ATM. A variety of network structures and topologies suited to partial and full event building are investigated. The need for efficient networking is shown. Optimization techniques for high speed messaging and their implementation on ATM components are described. Small-scale demonstrator systems consisting of up to 48 computers ({approx}1:20 of the final level 2 trigger) connected via ATM are described. Performance results are presented. Extrapolation of the measurements and an evaluation of the needs lead to a proposal of an implementation for the main network of the ATLAS T/DAQ system. (author)

  4. High-Speed Data Acquisition and Digital Signal Processing System for PET Imaging Techniques Applied to Mammography

    Science.gov (United States)

    Martinez, J. D.; Benlloch, J. M.; Cerda, J.; Lerche, Ch. W.; Pavon, N.; Sebastia, A.

    2004-06-01

    This paper is framed within the Positron Emission Mammography (PEM) project, whose aim is to develop an innovative gamma-ray sensor for early breast cancer diagnosis. Currently, breast cancer is detected using low-energy X-ray screening. However, functional imaging techniques such as PET/FDG could be employed to detect breast cancer and track disease changes with greater sensitivity. Furthermore, a small and less expensive PET camera can be utilized, minimizing the main problems of whole-body PET. To accomplish these objectives, we are developing a new gamma-ray sensor based on a newly released photodetector. A dedicated PEM detector, however, requires an adequate data acquisition (DAQ) and processing system. The characterization of gamma events needs a free-running analog-to-digital converter (ADC) with sampling rates of more than 50 MS/s and must sustain event count rates of up to 10 MHz. Moreover, comprehensive data processing must be carried out to obtain the event parameters necessary for performing the image reconstruction. A new-generation digital signal processor (DSP) has been used to comply with these requirements. This device enables us to run the DAQ system at up to 80 MS/s and to execute intensive calculations on the detector signals. This paper describes our DAQ and processing architecture, whose main features are: very high-speed data conversion, multichannel synchronized acquisition with zero dead time, a digital triggering scheme, and high data throughput with extensive optimization of the signal processing algorithms.
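The digital triggering scheme mentioned above amounts to detecting, in a free-running sample stream, where the signal rises through a threshold, as a hardware discriminator would. The function name, waveform values, and threshold below are made up for illustration.

```python
def find_triggers(samples, threshold):
    """Return the indices where the waveform crosses the threshold
    with positive slope (leading-edge discrimination)."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] < threshold <= samples[i]]

# A flat baseline with two gamma-like pulses riding on it.
waveform = [2, 3, 2, 40, 90, 50, 10, 3, 2, 60, 85, 30, 5, 2]
print(find_triggers(waveform, 50))  # -> [4, 9]
```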

  5. The MICE Online Systems

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The Muon Ionization Cooling Experiment (MICE) is designed to test transverse cooling of a muon beam, demonstrating an important step along the path toward creating future high-intensity muon beam facilities. Protons in the ISIS synchrotron impact a titanium target, producing pions which decay into muons that propagate through the beam line to the MICE cooling channel. Along the beam line, particle identification (PID) detectors, scintillating-fiber tracking detectors, and beam diagnostic tools identify and measure individual muons moving through the cooling channel. The MICE Online Systems encompass all tools, including hardware, software, and documentation, within the MLCR (MICE Local Control Room) that allow the experiment to efficiently record high-quality data. Controls and Monitoring (C&M), Data Acquisition (DAQ), Online Monitoring and Reconstruction, Data Transfer, and Networking all fall under the Online Systems umbrella. C&M controls all MICE systems including the target, conventional an...

  6. A control system upgrade of the spear synchrotron and injector

    International Nuclear Information System (INIS)

    Garrett, R.; Howry, S.; Wermelskirchen, C.; Yang, J.

    1995-11-01

    The SPEAR electron synchrotron is an old and venerable facility with a history of great physics. When this storage ring was converted to serve as a full-time synchrotron light source, it was evident that the facility was due for an overhaul of its control system. Outdated hardware interfaces, custom operator interfaces, and the control computer itself were replaced with off-the-shelf distributed intelligent controllers and networked X-workstations. However, almost all applications and control functions were retained by simply rewriting the layer of software closest to each new device. The success of this upgrade prompted us to do a similar upgrade of our Injector system. Although the Injector was already running an X-Windows-based control system, it was non-networked and Q-bus based. By using the same Ethernet-based controllers that were used at SPEAR, we were able to integrate the two systems into one that resembles the ''standard model'' for control systems, and at the same time preserve the applications software that has been developed over the years on both systems

  7. Study on a conceptual design of a data acquisition and instrument control system for experimental suites at materials and life science facility (MLF) of J-PARC

    International Nuclear Information System (INIS)

    Nakajima, Kenji; Nakatani, Takeshi; Torii, Shuki; Higemoto, Wataru; Otomo, Toshiya

    2006-02-01

    The JAEA (Japan Atomic Energy Agency)-KEK (High Energy Accelerator Research Organization) joint project, the Japan Proton Accelerator Research Complex (J-PARC), is now under construction. The Materials and Life Science Facility (MLF) is one of the planned facilities in this research complex. Neutron and muon sources will be installed at the MLF, delivering beams of world-leading intensity for a wide variety of scientific research subjects. To discuss the computing environments necessary for the neutron and muon instruments at J-PARC, the MLF computing environment group (MLF-CEG) has been organized. We, the members of the DAQ subgroup (DAQ-SG), are responsible for considering the data acquisition and instrument control systems for the experimental suites at the MLF. In the framework of the MLF-CEG, we are surveying the computing resources required for data acquisition and instrument control at future instruments, the current situation at existing facilities, and the possible solutions we can achieve. We are discussing the most suitable system, one that can bring out the full performance of our instruments. This is the first interim report of the DAQ-SG, in which our activity in 2003-2004 is summarized. In this report, a conceptual design of the software related to data acquisition and instrument control for the experimental instruments at the MLF is proposed. (author)

  8. Data acquisition and control system for SMARTEX – C

    Energy Technology Data Exchange (ETDEWEB)

    Yeole, Yogesh Govind, E-mail: yogesh@ipr.res.in [Institute for Plasma Research, Gandhinagar, 382 428 Gujarat (India); Lachhvani, Lavkesh; Bajpai, Manu; Rathod, Surendrasingh; Kumar, Abhijeet; Sathyanarayana, K.; Pujara, H.D. [Institute for Plasma Research, Gandhinagar, 382 428 Gujarat (India); Pahari, Sambaran [BARC, Vishakhapatanam, 530 012 Andhra Pradesh (India); Chattopadhyay, Prabal K. [Institute for Plasma Research, Gandhinagar, 382 428 Gujarat (India)

    2016-11-15

    Highlights: • We have developed a control and data acquisition system for the non-neutral plasma experiment SMARTEX – C. • The hardware of the system includes a high-current power supply, a trigger circuit, a comparator circuit, a PXI system and a computer. • The software has been developed in LabVIEW{sup ®}. • We present the complete time synchronization of the operation of the system. • Results obtained with the equipment are shown. - Abstract: A PXI-based data acquisition system has been developed for the Small Aspect Ratio Toroidal Experiment in C-shaped geometry (SMARTEX – C), a device to create and confine non-neutral plasma. The data acquisition system (DAQ) includes PXI-based data acquisition cards, a communication card, a chassis, an optical fiber link, a dedicated computer, a trigger circuit (TC) and a voltage comparator. In this paper, we report the development of a comprehensive code in LabVIEW{sup ®} – 2012 in order to control the operation of SMARTEX – C as well as to acquire the experimental data from it. The code incorporates features such as the configuration of card parameters. A hardware-based control sequence involving the TC has also been developed and integrated with the DAQ. In the acquisition part, the data from an experimental shot are acquired when a digital pulse from one of the PXI cards triggers the TC, which in turn triggers the TF power supply and the rest of the DAQ. The data thus acquired are stored on the hard disk in binary format for further analysis.

  9. A data acquisition system based on general VME system in WinXP

    International Nuclear Information System (INIS)

    Ning Zhe; Qian Sen; Wang Yifang; Heng Yuekun; Zhang Jiawei; Fu Zaiwei; Qi Ming; Zheng Yangheng

    2010-01-01

    A general data acquisition system based on VME boards was developed and encapsulated in the WinXP environment using LabVIEW, with a graphical interface. By integrating the virtual instrument panel of LabVIEW and calling the Dynamic Link Libraries (DLLs) of the crate controller, the VME modules were encapsulated into independent function modules for convenience of use. The BLT, MBLT and CBLT readout modes for different VME boards were studied. The modules can be easily selected and modified according to the requirements of different tests. Finally, successful applications of the high-resolution data acquisition (DAQ) software in several experimental environments are reported. (authors)

  10. The project of autocontrol for CAEN high voltage systems in high energy physics experiments

    International Nuclear Information System (INIS)

    Qian Sen; Wang Zhimin; Chinese Academy of Sciences, Beijing; Cai Xiao; Wang Yifang; Zhang Jiawen; Yang Changgen

    2008-01-01

    Based on TCP/IP network communication techniques, CAMAC bus technology, PCI bus technology and the RS232 serial communication technique, we developed a series of software tools for Linux and Win32 systems to automatically control the high-voltage systems made by CAEN, which are widely used in high energy physics experiments. The operator can use this software to control and monitor the system independently, or encapsulate it into the DAQ system to control the test system and acquire data synchronously and efficiently. (authors)
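The TCP/IP control idea above, a client sending short commands to set a channel voltage and read back its monitored value, can be sketched with a fake in-process crate. The command syntax, port, and the stand-in crate are invented for illustration; the real CAEN interfaces differ.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50017

def fake_hv_crate(ready):
    """Stand-in HV crate: one channel whose voltage can be set and read."""
    voltage = {0: 0.0}
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # tell the client we are listening
        conn, _ = srv.accept()
        with conn:
            for _ in range(2):           # serve exactly two commands
                cmd = conn.recv(64).decode().split()
                if cmd[0] == "SET":
                    voltage[int(cmd[1])] = float(cmd[2])
                    conn.sendall(b"OK")
                elif cmd[0] == "READ":
                    conn.sendall(str(voltage[int(cmd[1])]).encode())

ready = threading.Event()
server = threading.Thread(target=fake_hv_crate, args=(ready,))
server.start()
ready.wait()

with socket.socket() as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"SET 0 1500")           # ramp channel 0 to 1500 V
    ack = cli.recv(64).decode()          # wait for the acknowledgement
    cli.sendall(b"READ 0")               # then monitor the voltage back
    reading = cli.recv(64).decode()
server.join()
print(ack, reading)  # -> OK 1500.0
```

Waiting for the `OK` before sending the next command keeps the request/response pairs from interleaving on the TCP stream.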

  11. FELIX: the new detector readout system for the ATLAS experiment

    CERN Document Server

    Zhang, Jinlong; The ATLAS collaboration

    2017-01-01

    After the Phase-I upgrade and onward, the Front-End Link eXchange (FELIX) system will be the interface between the data handling system and the detector front-end and trigger electronics at the ATLAS experiment. FELIX will function as a router between custom serial links and a commodity switch network, which will use standard technologies to communicate with data collecting and processing components. The FELIX system is being developed using commercial off-the-shelf server PC technology in combination with an FPGA-based PCIe Gen3 I/O card interfacing to GigaBit Transceiver links, with Timing, Trigger and Control connectivity provided by an FMC-based mezzanine card. Dedicated firmware for the Xilinx FPGAs (Virtex 7 and Kintex UltraScale) installed on the I/O card, alongside an interrupt-driven Linux kernel driver and user-space software, will provide the required functionality. On the network side, the FELIX unit connects to both an Ethernet-based network and InfiniBand. The system architecture of FE...

  12. Further development of the ZEUS Expert System: Computer science foundations of design

    International Nuclear Information System (INIS)

    Flasinski, M.

    1994-03-01

    The prototype version of the ZEUS Expert System, ZEXP, diagnosed selected aspects of the DAQ system during ZEUS running in 1993. In November 1993, ZEUS decided to extend its scope in order to cover all crucial aspects of operating the ZEUS detector (Run Control, Slow Control, Data Acquisition performance and Data Quality Monitoring). The paper summarizes the fundamental assumptions concerning the design of the final version of the ZEUS Expert System, ZEX. Although the theoretical background material relates primarily to ZEX, its elements can be used for constructing other expert systems for HEP experiments. (orig.)

  13. Test and improvement of readout system based on APV25 chip for GEM detector

    International Nuclear Information System (INIS)

    Hu Shouyang; Jian Siyu; Zhou Jing; Shan Chao; Li Xinglong; Li Xia; Li Xiaomei; Zhou Yi

    2014-01-01

    The gas electron multiplier (GEM) is the most promising position-sensitive gas detector. The new generation of the readout electronics system includes the APV25 front-end card, a multi-purpose digitizer (MPD), a VME controller and Linux-based data acquisition software (DAQ). The construction and preliminary tests of this readout system were completed, and good-quality data were obtained at system working frequencies of 40 MHz and 20 MHz. A long-duration running test shows that the system has very good time stability. By optimizing the software configuration and improving the hardware quality, the noise level was reduced and the signal-to-noise ratio was improved. (authors)

  14. The LHCb trigger and data acquisition system

    CERN Document Server

    Dufey, J P; Harris, F; Harvey, J; Jost, B; Mato, P; Müller, E

    2000-01-01

    The LHCb experiment is the most recently approved of the four experiments under construction at CERN's LHC accelerator. It is a special-purpose experiment designed to precisely measure the CP violation parameters in the B-B̄ system. Triggering poses special problems since the interesting events containing B-mesons are immersed in a large background of inelastic p-p reactions. We therefore decided to implement a four-level triggering scheme. The LHCb Data Acquisition (DAQ) system will have to cope with an average trigger rate of ~40 kHz, after two levels of hardware triggers, and an average event size of ~100 kB. Thus an event-building network which can sustain an average bandwidth of 4 GB/s is required. A powerful software trigger farm will have to be installed to reduce the rate from 40 kHz to the ~100 Hz of events written to permanent storage. In this paper we outline the general architecture of the Trigger and DAQ system and the readout protocols we plan to implement. First results of simulations of the behavior o...
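The quoted event-building bandwidth follows directly from the trigger rate and the event size:

```python
rate_hz = 40_000          # ~40 kHz average trigger rate after hardware triggers
event_size_b = 100_000    # ~100 kB average event size
bandwidth_gb_s = rate_hz * event_size_b / 1e9
print(bandwidth_gb_s)     # -> 4.0 (GB/s)
```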

  15. Three-tiered integration of PACS and HIS toward next generation total hospital information system.

    Science.gov (United States)

    Kim, J H; Lee, D H; Choi, J W; Cho, H I; Kang, H S; Yeon, K M; Han, M C

    1998-01-01

    The Seoul National University Hospital (SNUH) started a project to innovate the hospital's information facilities. This project includes the installation of a high-speed hospital network and the development of a new HIS, OCS (order communication system), RIS and PACS. The project aims at the implementation of the first total hospital information system by seamlessly integrating these systems together. To achieve this goal, we took a three-tiered systems integration approach: network-level, database-level, and workstation-level integration. There are three network loops at SNUH: a proprietary star network for the host-computer-based HIS, an Ethernet-based hospital LAN for the OCS and RIS, and an ATM-based network for PACS. They are linked together at the backbone level to allow high-speed communication between these systems. We have developed special communication modules for each system that allow data interchange between different databases and computer platforms. We have also developed an integrated workstation in which both the OCS and PACS application programs run on a single computer in an integrated manner, allowing clinical users to access and display radiological images as well as textual clinical information within a single user environment. A study is in progress toward a total hospital information system at SNUH that seamlessly integrates the main hospital information resources such as HIS, OCS, and PACS. With the three-tiered systems integration approach, we could successfully integrate the systems from the network level to the user application level.

  16. Contributions to the back-end software sub-system of the ATLAS data acquisition of event filter prototype -1 project

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.; Niculescu, M.; Radu, A.

    1998-01-01

    A project has been approved by the ATLAS Collaboration for the design and implementation of a Data Acquisition (DAQ) and Event Filter (EF) prototype, based on the functional architecture described in the ATLAS Technical Proposal. The prototype consists of a full 'vertical' slice of the ATLAS Data Acquisition and Event Filter architecture and can be seen as made of four sub-systems: the Detector Interface, the Dataflow, the Back-end DAQ and the Event Filter. The Bucharest group is a member of the DAQ/EF collaboration and during 1997 was involved in the Back-end activities. The back-end software encompasses the software for configuring, controlling and monitoring the DAQ, but specifically excludes the management, processing and transportation of physics data. The user requirements gathered for the back-end sub-system have been divided into groups related to activities providing similar functionality. The groups have been further developed into components of the Back-end with a well-defined purpose and boundaries. Each component offers some unique functionality and has its own architecture. The current Back-end component model includes 5 core components (run control, configuration databases, message reporting system, process manager and information service) and 6 detector integration components (partition and resource manager, status display, run bookkeeper, event dump, test manager and diagnostic package). The Bucharest group participated in the high-level design, implementation and testing of three components (information service, message reporting system and status display). The Information Service (IS) provides an information exchange facility for software components of the DAQ. Information (defined by the supplier) from many sources can be categorized and made available to requesting applications asynchronously or on demand. The design of the information service followed an object-oriented approach. It is a multiple-server configuration in which servers are dedicated to
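The Information Service pattern described above, suppliers publishing named, categorized values that clients either read on demand or subscribe to, can be sketched with a toy in-memory class. This is a stand-in for illustration, not the ATLAS implementation, which is a multiple-server configuration.

```python
class InfoService:
    """Toy information-exchange facility: categorized named values with
    on-demand reads and asynchronous-style subscription callbacks."""

    def __init__(self):
        self._values = {}
        self._subscribers = {}

    def publish(self, category, name, value):
        self._values[(category, name)] = value
        for callback in self._subscribers.get((category, name), []):
            callback(value)              # notify interested clients

    def read(self, category, name):
        return self._values[(category, name)]   # on-demand access

    def subscribe(self, category, name, callback):
        self._subscribers.setdefault((category, name), []).append(callback)

svc = InfoService()
seen = []
svc.subscribe("runctl", "run_number", seen.append)
svc.publish("runctl", "run_number", 4711)
print(svc.read("runctl", "run_number"), seen)  # -> 4711 [4711]
```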

  17. Clear-PEM: A PET imaging system dedicated to breast cancer diagnostics

    CERN Document Server

    Abreu, M C; Albuquerque, E; Almeida, F G; Almeida, P; Amaral, P; Auffray, Etiennette; Bento, P; Bruyndonckx, P; Bugalho, R; Carriço, B; Cordeiro, H; Ferreira, M; Ferreira, N C; Gonçalves, F; Lecoq, Paul; Leong, C; Lopes, F; Lousã, P; Luyten, J; Martins, M V; Matela, N; Rato-Mendes, P; Moura, R; Nobre, J; Oliveira, N; Ortigão, C; Peralta, L; Rego, J; Ribeiro, R; Rodrigues, P; Santos, A I; Silva, J C; Silva, M M; Tavernier, Stefaan; Teixeira, I C; Texeira, J P; Trindade, A; Trummer, Julia; Varela, J

    2007-01-01

    The Clear-PEM scanner for positron emission mammography, currently under development, is described. The detector is based on pixelized LYSO crystals optically coupled to avalanche photodiodes and read out by a fast, low-noise electronic system. A dedicated digital trigger (TGR) and data acquisition (DAQ) system is used for the on-line selection of coincidence events with high efficiency, large bandwidth and small dead time. A specialized gantry allows exams of the breast and of the axilla to be performed. In this paper we present results of measurements of the detector modules that make up the system under construction, as well as the imaging performance estimated from Monte Carlo simulated data.

  18. Jefferson Lab Data Acquisition Run Control System

    International Nuclear Information System (INIS)

    Vardan Gyurjyan; Carl Timmer; David Abbott; William Heyes; Edward Jastrzembski; David Lawrence; Elliott Wolin

    2004-01-01

    A general overview of the Jefferson Lab data acquisition run control system is presented. This run control system is designed to operate the configuration, control, and monitoring of all Jefferson Lab experiments. It controls data-taking activities by coordinating the operation of DAQ sub-systems, online software components and third-party software such as external slow control systems. The main feature which sets this system apart from conventional systems is its incorporation of intelligent-agent concepts. Intelligent agents are autonomous programs which interact with each other through certain protocols on a peer-to-peer level. In this case, the protocols and standards used come from the domain-independent Foundation for Intelligent Physical Agents (FIPA), and the implementation used is the Java Agent Development Framework (JADE). A lightweight, XML/RDF-based language was developed to standardize the description of the run control system for configuration purposes

  19. Design and Implementation of Electric Steering Gear Inspection System for Unmanned Aerial Vehicles Based on Virtual Instruments

    Directory of Open Access Journals (Sweden)

    Zheng Xing

    2016-01-01

    Full Text Available A UAV electric steering gear inspection system based on virtual instruments is designed in this paper, including a hardware platform based on the PC-DAQ virtual instrument architecture and a software platform based on LabVIEW. The function, structure and implementation method of the software platform are also described. The results of gear limit checking, zero testing and time-domain characteristics tests showed that the system meets the testing requirements well and can carry out the inspection of the electric steering gear automatically, quickly, easily and accurately.

  20. Tests of the data acquisition system and detector control system for the muon chambers of the CMS experiment at the LHC

    International Nuclear Information System (INIS)

    Sowa, Michael Christian

    2009-01-01

The Phys. Inst. III A of RWTH Aachen University is involved in the development, production and testing of the Drift Tube (DT) muon chambers for the barrel muon system of the CMS detector at the LHC at CERN (Geneva). The present thesis describes test procedures which were developed and performed for the chamber-local Data Acquisition (DAQ) system, as well as for parts of the Detector Control System (DCS). The test results were analyzed and discussed. Two main kinds of DAQ tests were done. On the one hand, to compare two different DAQ systems, the chamber signals were split and read out by both systems. This method allowed the systems to be validated by demonstrating that there were no relevant differences in the measured drift times generated by the same muon event in the same chamber cells. On the other hand, after the systems were validated, the quality of the data was checked. For this purpose extensive noise studies were performed. The dependence of the noise on various parameters (threshold, HV) was investigated quantitatively. Detailed studies on single cells qualified as ''dead'' or ''noisy'' were also done. For the DAQ tests a flexible hardware and software environment was needed. The organization and installation of the supplied electronics, as well as the software development, were realized within the scope of this thesis. The DCS tests focused on the local gas-pressure read-out components attached directly to the chamber: the pressure sensors, manifolds and the pressure ADC (PADC). At first it was crucial to prove that the calibration of these chamber components for the gas pressure measurement was valid. The sensor calibration data were checked and possible differences in their response to the same pressure were studied. The analysis of the results indicated that the sensor output also depends on the ambient temperature, a new finding which implied an additional pedestal measurement of the chamber gas pressure sensors at CMS. The second test sequence

  1. Preliminary report on a breathing coaching and assessment system for use by patients at home

    International Nuclear Information System (INIS)

    Fox, C.D.; Kron, T.; Winton, J.R.S.; Rothwell, R.

    2010-01-01

Full text: Respiratory-gated radiotherapy requires consistent breathing. Therefore, we developed a system that will assess breathing consistency and allow patients to train themselves at home. Real-time feedback is provided visually to patients against a reference breathing track derived from their own breathing pattern. The system needs both to generate the reference track and to use this reference track for coaching. The system should be simple, robust and affordable, without complex setup. Results The system uses a netbook with a USB-connected data acquisition module (DAQ). The patient's breathing is sampled by the DAQ, measuring intra-nasal pressure through nasal prongs. Software was written in collaboration with the Victorian eResearch Strategic Initiative (VeRSI). The system is used to collect a patient reference breathing track. This track is processed to generate a 'golden breathing cycle' (GBC), normalised in both amplitude and duration, containing the shape of the breathing cycle. After training, the patient takes the system home for a number of sessions of coaching and assessment. In coaching mode the patient is asked to keep a graphic representation of their current state of breathing in close correlation with the golden breathing cycle as it moves across the screen. The displayed GBC amplitude and duration respond dynamically to the patient's breathing rhythm. Statistics are collected measuring the patient's ability to conform to the GBC and may be used to decide suitability for gated therapy. Conclusion The DAQ hardware is completed, and the software is approaching completion. Sample data have been collected from volunteers.
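
The amplitude-and-duration normalisation behind such a 'golden breathing cycle' can be sketched as follows. The resampling length and the min-max amplitude scaling here are assumptions for illustration, not the paper's actual signal processing:

```python
import numpy as np

def golden_breathing_cycle(cycles, n_points=100):
    """Average several breathing cycles after normalising each one in
    duration (resample to n_points) and amplitude (to the 0..1 range).
    `cycles` is a list of 1-D pressure traces, one per breath."""
    resampled = []
    for c in cycles:
        c = np.asarray(c, dtype=float)
        # duration normalisation: resample onto a common unit time base
        t_old = np.linspace(0.0, 1.0, len(c))
        t_new = np.linspace(0.0, 1.0, n_points)
        r = np.interp(t_new, t_old, c)
        # amplitude normalisation to [0, 1]
        r = (r - r.min()) / (r.max() - r.min())
        resampled.append(r)
    return np.mean(resampled, axis=0)

# two synthetic breaths of different length and depth
t1 = np.sin(np.linspace(0, np.pi, 80)) * 3.0
t2 = np.sin(np.linspace(0, np.pi, 120)) * 5.0
gbc = golden_breathing_cycle([t1, t2])
print(len(gbc))   # → 100
```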

  2. Design of Data Acquisition and Control System for Indian Test Facility of Diagnostics Neutral Beam

    International Nuclear Information System (INIS)

    Soni, Jignesh; Tyagi, Himanshu; Yadav, Ratnakar; Rotti, Chandramouli; Bandyopadhyay, Mainak; Bansal, Gourab; Gahluat, Agrajit; Sudhir, Dass; Joshi, Jaydeep; Prasad, Rambilas; Pandya, Kaushal; Shah, Sejal; Parmar, Deepak; Chakraborty, Arun

    2015-01-01

Highlights: • More than 900 channels Data Acquisition and Control System. • INTF DACS has been designed based on ITER-PCDH guidelines. • Separate Interlock and Safety system designed based on IEC 61508 standard. • Hardware selected from ITER slow controller and fast controller catalog. • Software framework based on ITER CODAC Core System and LabVIEW software. - Abstract: The Indian Test Facility (INTF) is a negative-hydrogen-ion based 100 kV, 60 A, 5 Hz modulated NBI system with a 3 s ON/20 s OFF duty cycle. The prime objective of the facility is to install a full-scale test bed for the qualification of all Diagnostic Neutral Beam (DNB) parameters, prior to installation in ITER. The automated and safe operation of the INTF requires a reliable and rugged instrumentation and control system which provides control, data acquisition (DAQ), interlock and safety functions, referred to as the INTF-DACS. The INTF-DACS has been designed based on the ITER CODAC architecture and ITER-PCDH guidelines, since the technical understanding of CODAC technology gained from this will later be helpful in the development of the plant system I&C for the DNB. For complete operation of the INTF, approximately 900 signals have to be supervised by the DACS. In the INTF the required conventional control loop time is within the range of 5–100 ms, and the DAQ, except for high-end diagnostics, requires sampling rates in the range of 5 samples per second (Sps) to 10 kSps; to fulfill these requirements, hardware components have been selected from the ITER slow and fast controller catalogs. High-end diagnostics require sampling rates up to 100 MSps, normally in the case of certain events; therefore event- and burst-based DAQ hardware has been finalized. The combined use of CODAC core software (CCS) and NI LabVIEW has been finalized due to the fact that the full required DAQ support is not available in the present version of CCS. Interlock system for investment protection of facility and Safety system for
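
A back-of-envelope check of the aggregate data rate implied by the figures above (900 channels, conventional DAQ up to 10 kSps). The 2-bytes-per-sample word size is an illustrative assumption; the abstract does not state it:

```python
# Worst-case aggregate rate for the conventional DAQ channels,
# assuming (illustratively) every channel runs at the top rate.
channels = 900
max_rate_sps = 10_000          # upper end of the 5 Sps .. 10 kSps range
bytes_per_sample = 2           # assumed ADC word size

aggregate_bytes_s = channels * max_rate_sps * bytes_per_sample
print(f"{aggregate_bytes_s / 1e6:.1f} MB/s")   # → 18.0 MB/s
```

Even this worst case sits comfortably within the ITER slow/fast controller families, which is consistent with reserving dedicated event- and burst-based hardware only for the 100 MSps diagnostics.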

  3. Design and development of a data acquisition system for photovoltaic modules characterization

    Energy Technology Data Exchange (ETDEWEB)

Belmili, Hocine [Unite de Developpement des Equipements Solaires (UDES), Route Nationale No11, Bou-Isamil BP 365, Tipaza 42415, Algerie]; Ait Cheikh, Salah Med; Haddadi, Mourad; Larbes, Cherif [Ecole Nationale Polytechnique, Laboratoire de Dispositifs de Communication et de Conversion Photovoltaique (LDCCP), 10 Avenue Hassen Badi, El Harrach 16200 Alger (Algeria)

    2010-07-15

Testing the performance of photovoltaic generators is complicated. This is due to the influence of a variety of interacting parameters related to the environment, such as solar irradiation and temperature, in addition to the solar cell material (mono-crystalline, poly-crystalline, amorphous and thin films). This paper presents a computer-based instrumentation system for the characterization of photovoltaic (PV) conversion. It is based on the design of a data acquisition system (DAQS) allowing the acquisition and plotting of the characterization measurements of PV modules under real meteorological test conditions. (author)
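
What such a DAQS does logically can be sketched as a voltage sweep that records (V, I) pairs and locates the maximum power point. The toy module model and all its parameters below are assumptions for illustration, not the authors' hardware or data:

```python
import math

def iv_point(v, isc=8.0, voc=21.0, a=2.0):
    """Toy PV module model (not a calibrated single-diode fit): the
    current falls exponentially from Isc at V=0 to zero at V=Voc."""
    return isc * (1.0 - (math.exp(v / a) - 1.0) / (math.exp(voc / a) - 1.0))

def sweep(n=200, voc=21.0):
    # Emulates the characterization run: step the operating voltage,
    # record (V, I) pairs, and report the maximum power point.
    points = [(v, iv_point(v)) for v in (voc * k / (n - 1) for k in range(n))]
    v_mpp, i_mpp = max(points, key=lambda p: p[0] * p[1])
    return points, v_mpp * i_mpp

points, p_max = sweep()
print(len(points), round(p_max, 1))
```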

  4. Design of Data Acquisition and Control System for Indian Test Facility of Diagnostics Neutral Beam

    Energy Technology Data Exchange (ETDEWEB)

    Soni, Jignesh, E-mail: jsoni@ipr.res.in [Institute for Plasma Research, Bhat, Gandhinagar 382 428, Gujarat (India); Tyagi, Himanshu; Yadav, Ratnakar; Rotti, Chandramouli; Bandyopadhyay, Mainak [ITER-India, Institute for Plasma Research, Gandhinagar 380 025, Gujarat (India); Bansal, Gourab; Gahluat, Agrajit [Institute for Plasma Research, Bhat, Gandhinagar 382 428, Gujarat (India); Sudhir, Dass; Joshi, Jaydeep; Prasad, Rambilas [ITER-India, Institute for Plasma Research, Gandhinagar 380 025, Gujarat (India); Pandya, Kaushal [Institute for Plasma Research, Bhat, Gandhinagar 382 428, Gujarat (India); Shah, Sejal; Parmar, Deepak [ITER-India, Institute for Plasma Research, Gandhinagar 380 025, Gujarat (India); Chakraborty, Arun [Institute for Plasma Research, Bhat, Gandhinagar 382 428, Gujarat (India)

    2015-10-15

Highlights: • More than 900 channels Data Acquisition and Control System. • INTF DACS has been designed based on ITER-PCDH guidelines. • Separate Interlock and Safety system designed based on IEC 61508 standard. • Hardware selected from ITER slow controller and fast controller catalog. • Software framework based on ITER CODAC Core System and LabVIEW software. - Abstract: The Indian Test Facility (INTF) is a negative-hydrogen-ion based 100 kV, 60 A, 5 Hz modulated NBI system with a 3 s ON/20 s OFF duty cycle. The prime objective of the facility is to install a full-scale test bed for the qualification of all Diagnostic Neutral Beam (DNB) parameters, prior to installation in ITER. The automated and safe operation of the INTF requires a reliable and rugged instrumentation and control system which provides control, data acquisition (DAQ), interlock and safety functions, referred to as the INTF-DACS. The INTF-DACS has been designed based on the ITER CODAC architecture and ITER-PCDH guidelines, since the technical understanding of CODAC technology gained from this will later be helpful in the development of the plant system I&C for the DNB. For complete operation of the INTF, approximately 900 signals have to be supervised by the DACS. In the INTF the required conventional control loop time is within the range of 5–100 ms, and the DAQ, except for high-end diagnostics, requires sampling rates in the range of 5 samples per second (Sps) to 10 kSps; to fulfill these requirements, hardware components have been selected from the ITER slow and fast controller catalogs. High-end diagnostics require sampling rates up to 100 MSps, normally in the case of certain events; therefore event- and burst-based DAQ hardware has been finalized. The combined use of CODAC core software (CCS) and NI LabVIEW has been finalized due to the fact that the full required DAQ support is not available in the present version of CCS. Interlock system for investment protection of facility and Safety system for

  5. Development of the C-Band BPM System for ATF2

    CERN Document Server

    Lyapin, A; Wing, M; Shin, S; Honda, Y; Tauchi, T; Terunuma, N; Heo, A; Kim, E; Kim, K; Ainsworth, R C E; Boogert, S T; Boorman, G; Molloy, S; McCormick, D; Nelson, J; White, G; Ward, D

    2010-01-01

The ATF2 international collaboration intends to demonstrate the nanometre beam sizes required for the future Linear Colliders. An essential part of the beam diagnostics needed to achieve that goal is the set of high-resolution cavity beam position monitors (BPMs). In this paper we report on the C-band system consisting of 32 BPMs spread over the whole length of the new ATF2 extraction beamline. We discuss the design of the BPMs and electronics, the main features of the DAQ system, and the first operational experience with these BPMs.

  6. Simulation and modeling of data acquisition systems for future high energy physics experiments

    International Nuclear Information System (INIS)

    Booth, A.; Black, D.; Walsh, D.; Bowden, M.; Barsotti, E.

    1990-01-01

With the ever-increasing complexity of detectors and their associated data acquisition (DAQ) systems, it is important to bring together a set of tools to enable system designers, both hardware and software, to understand the behavioral aspects of the system as a whole, as well as the interaction between different functional units within the system. For complex systems, human intuition is inadequate since there are simply too many variables for system designers to begin to predict how varying any subset of them affects the total system. On the other hand, exact analysis, even to the extent of investing in disposable hardware prototypes, is much too time consuming and costly. Simulation bridges the gap between physical intuition and exact analysis by providing a learning vehicle in which the effects of varying many parameters can be analyzed and understood. Simulation techniques are being used in the development of the Scalable Parallel Open Architecture Data Acquisition System at Fermilab. This paper describes the work undertaken at Fermilab in which several sophisticated tools have been brought together to provide an integrated systems engineering environment specifically aimed at designing DAQ systems. Also presented are results of simulation experiments in which the effects of varying trigger rates, event sizes and event distribution over processors are clearly seen in terms of throughput and buffer usage in an event-building switch
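
The kind of question such a simulation answers can be illustrated with a toy time-stepped model of a single event-building link: Bernoulli-approximated Poisson triggers fill a buffer that drains at the link bandwidth. The rates and sizes are arbitrary illustrative values, far simpler than the integrated tools described above:

```python
import random

def simulate(trigger_rate_hz, event_kb, link_mb_s, seconds=10, seed=1):
    """Toy 1 ms time-stepped model of one event-building link.
    Returns the peak buffer occupancy in kB over the run."""
    random.seed(seed)
    drain_kb_per_ms = link_mb_s            # 1 MB/s drains 1 kB per ms
    p_trigger = trigger_rate_hz / 1000.0   # expected triggers per 1 ms step
    buf_kb = peak_kb = 0.0
    for _ in range(seconds * 1000):
        if random.random() < p_trigger:    # a new event fragment arrives
            buf_kb += event_kb
        peak_kb = max(peak_kb, buf_kb)     # sample occupancy before draining
        buf_kb = max(0.0, buf_kb - drain_kb_per_ms)
    return peak_kb

# 500 Hz of 100 kB events into a 50 MB/s link: the mean load equals the
# capacity, so buffer excursions can grow large even though averages balance.
print(round(simulate(500, 100.0, 50.0), 1))
```

Sweeping trigger rate, event size and link bandwidth in such a loop reproduces, in miniature, the throughput-versus-buffer-usage trade-offs the paper studies in the event-building switch.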

  7. Simulation and modeling of data acquisition systems for future high energy physics experiments

    International Nuclear Information System (INIS)

    Booth, A.; Black, D.; Walsh, D.; Bowden, M.; Barsotti, E.

    1991-01-01

With the ever-increasing complexity of detectors and their associated data acquisition (DAQ) systems, it is important to bring together a set of tools to enable system designers, both hardware and software, to understand the behavioral aspects of the system as a whole, as well as the interaction between different functional units within the system. For complex systems, human intuition is inadequate since there are simply too many variables for system designers to begin to predict how varying any subset of them affects the total system. On the other hand, exact analysis, even to the extent of investing in disposable hardware prototypes, is much too time consuming and costly. Simulation bridges the gap between physical intuition and exact analysis by providing a learning vehicle in which the effects of varying many parameters can be analyzed and understood. Simulation techniques are being used in the development of the Scalable Parallel Open Architecture Data Acquisition System at Fermilab, in which several sophisticated tools have been brought together to provide an integrated systems engineering environment specifically aimed at designing DAQ systems. Also presented are results of simulation experiments in which the effects of varying trigger rates, event sizes and event distribution over processors are clearly seen in terms of throughput and buffer usage in an event-building switch

  8. A wireless control system for the HTS-ECRIS, PKDELIS ion source for the HCI injector at IUAC

    International Nuclear Information System (INIS)

    Dutt, R.N.; Mathur, Y.; Lakshmy, P.S.; Rao, U.K.; Barua, P.; Rodrigues, G.; Kanjilal, D.

    2015-01-01

An 18 GHz high-Tc superconducting ECR ion source, PKDELIS, developed in collaboration with Pantechnik, France, has been installed on a high-voltage (100 kV) deck in beam hall III to provide charged particles for the HCI-DTL system. The control system for this source has been implemented using a state-of-the-art wireless interconnection for electrical isolation of the control/coordination channel. An RS-485/MODBUS system has been used for the local control bus for its proven reliability, ruggedness and stable software/hardware support. The end control point is implemented with industrial-grade, low-cost but rugged and high-voltage-protected ADCs, DACs and DIOs. Real-time parameter plotting and fast automatic scanning of M/q vs. magnetic field, by sweeping the mass analyzer without operator intervention, are provided. A remote Ethernet-based client-server interface, using a TCP/IP-compatible protocol, for integration of the system with the complete HCI + DTL + LINAC facility has also been incorporated. The system has proven to be rugged and reliable. The details of the control system architecture, its topology, hardware and software interfaces and protocols, and the complete software structure are provided in the paper. (author)
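
For a concrete flavour of the MODBUS side, the sketch below builds a standard RTU 'read holding registers' request as a slow-control master might issue it over RS-485. The CRC-16 algorithm is the one fixed by the MODBUS specification; the slave address and register values are arbitrary examples, not the facility's actual register map:

```python
import struct

def crc16_modbus(data: bytes) -> int:
    """Standard MODBUS CRC-16 (polynomial 0xA001 reflected, init 0xFFFF)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def read_holding_registers(slave: int, addr: int, count: int) -> bytes:
    """Build a MODBUS RTU 'read holding registers' (function 0x03) request
    frame: address/function bytes, big-endian fields, little-endian CRC."""
    pdu = struct.pack(">BBHH", slave, 0x03, addr, count)
    return pdu + struct.pack("<H", crc16_modbus(pdu))

frame = read_holding_registers(slave=1, addr=0x0000, count=2)
print(frame.hex())
```

A receiver validates such a frame by recomputing the CRC over all eight bytes: with the CRC appended little-endian, the residue is zero for an intact frame.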

  9. A versatile scalable PET processing system

    International Nuclear Information System (INIS)

    Dong, H.; Weisenberger, A.; McKisson, J.; Wenze, Xi; Cuevas, C.; Wilson, J.; Zukerman, L.

    2011-01-01

    Positron Emission Tomography (PET) historically has major clinical and preclinical applications in cancerous oncology, neurology, and cardiovascular diseases. Recently, in a new direction, an application specific PET system is being developed at Thomas Jefferson National Accelerator Facility (Jefferson Lab) in collaboration with Duke University, University of Maryland at Baltimore (UMAB), and West Virginia University (WVU) targeted for plant eco-physiology research. The new plant imaging PET system is versatile and scalable such that it could adapt to several plant imaging needs - imaging many important plant organs including leaves, roots, and stems. The mechanical arrangement of the detectors is designed to accommodate the unpredictable and random distribution in space of the plant organs without requiring the plant be disturbed. Prototyping such a system requires a new data acquisition system (DAQ) and data processing system which are adaptable to the requirements of these unique and versatile detectors.

  10. A data acquisition and control system for high-speed gamma-ray tomography

    Science.gov (United States)

    Hjertaker, B. T.; Maad, R.; Schuster, E.; Almås, O. A.; Johansen, G. A.

    2008-09-01

A data acquisition and control system (DACS) for high-speed gamma-ray tomography based on the USB (Universal Serial Bus) and Ethernet communication protocols has been designed and implemented. The high-speed gamma-ray tomograph comprises five 500 mCi 241Am gamma-ray sources, each with a principal energy of 59.5 keV, corresponding to five detector modules, each consisting of 17 CdZnTe detectors. The DACS design is based on Microchip's PIC18F4550 and PIC18F4620 microcontrollers, which provide a USB 2.0 interface and an Ethernet (IEEE 802.3) interface, respectively. By implementing the USB- and Ethernet-based DACS, a sufficiently high data acquisition rate is obtained and no dedicated hardware installation is required for the data acquisition computer, assuming that it is already equipped with a standard USB and/or Ethernet port. The API (Application Programming Interface) for the DACS is built on National Instruments' LabVIEW® graphical development tool, which provides a simple and robust foundation for further application software development for the tomograph. The data acquisition interval, i.e. the integration time, of the high-speed gamma-ray tomograph is user selectable and is a function of the statistical measurement accuracy required for the specific application. The bandwidth of the DACS is 85 kBytes s-1 for the USB communication protocol and 28 kBytes s-1 for the Ethernet protocol. When using the iterative least-squares reconstruction algorithm with a 1 ms integration time, the USB-based DACS provides an online image update rate of 38 Hz, i.e. 38 frames per second, and 31 Hz for the Ethernet-based DACS. The off-line image update rate (storage to disk) for the USB-based DACS is 278 Hz using a 1 ms integration time. Initial characterization of the high-speed gamma-ray tomograph using the DACS on polypropylene phantoms is presented in the paper.
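
The quoted USB bandwidth is consistent with the detector count and integration time if one assumes one byte of counts per detector per frame (an assumption for illustration, since the abstract does not state the word size):

```python
# Consistency check on the quoted 85 kBytes/s USB bandwidth, assuming
# (illustratively) one byte of counts per detector per integration interval.
sources = 5
detectors_per_module = 17
integration_ms = 1.0

detectors = sources * detectors_per_module          # total detector count
frames_per_s = 1000.0 / integration_ms              # 1 ms integration -> 1 kHz raw
raw_kb_per_s = detectors * 1 * frames_per_s / 1000  # 1 byte/detector assumed
print(detectors, raw_kb_per_s)                      # → 85 85.0
```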

  11. A data acquisition and control system for high-speed gamma-ray tomography

    International Nuclear Information System (INIS)

    Hjertaker, B T; Maad, R; Schuster, E; Almås, O A; Johansen, G A

    2008-01-01

A data acquisition and control system (DACS) for high-speed gamma-ray tomography based on the USB (Universal Serial Bus) and Ethernet communication protocols has been designed and implemented. The high-speed gamma-ray tomograph comprises five 500 mCi 241Am gamma-ray sources, each with a principal energy of 59.5 keV, corresponding to five detector modules, each consisting of 17 CdZnTe detectors. The DACS design is based on Microchip's PIC18F4550 and PIC18F4620 microcontrollers, which provide a USB 2.0 interface and an Ethernet (IEEE 802.3) interface, respectively. By implementing the USB- and Ethernet-based DACS, a sufficiently high data acquisition rate is obtained and no dedicated hardware installation is required for the data acquisition computer, assuming that it is already equipped with a standard USB and/or Ethernet port. The API (Application Programming Interface) for the DACS is built on National Instruments' LabVIEW® graphical development tool, which provides a simple and robust foundation for further application software development for the tomograph. The data acquisition interval, i.e. the integration time, of the high-speed gamma-ray tomograph is user selectable and is a function of the statistical measurement accuracy required for the specific application. The bandwidth of the DACS is 85 kBytes s-1 for the USB communication protocol and 28 kBytes s-1 for the Ethernet protocol. When using the iterative least-squares reconstruction algorithm with a 1 ms integration time, the USB-based DACS provides an online image update rate of 38 Hz, i.e. 38 frames per second, and 31 Hz for the Ethernet-based DACS. The off-line image update rate (storage to disk) for the USB-based DACS is 278 Hz using a 1 ms integration time. Initial characterization of the high-speed gamma-ray tomograph using the DACS on polypropylene phantoms is presented in the paper.

  12. FELIX - the new detector readout system for the ATLAS experiment

    CERN Document Server

AUTHOR|(SzGeCERN)754725; The ATLAS collaboration; Anderson, John Thomas; Borga, Andrea; Boterenbrood, Hendrik; Chen, Hucheng; Chen, Kai; Drake, Gary; Donszelmann, Mark; Francis, David; Gorini, Benedetto; Guest, Daniel; Lanni, Francesco; Lehmann Miotto, Giovanna; Levinson, Lorne; Roich, Alexander; Schreuder, Frans Philip; Schumacher, Jörn; Vandelli, Wainer; Vermeulen, Jos; Wu, Weihao; Zhang, Jinlong

    2016-01-01

From the ATLAS Phase-I upgrade and onward, new or upgraded detectors and trigger systems will be interfaced to the data acquisition, detector control and timing (TTC) systems by the Front-End Link eXchange (FELIX). FELIX is the core of the new ATLAS Trigger/DAQ architecture. Functioning as a router between custom serial links and a commodity network, FELIX is implemented by server PCs with commodity network interfaces and PCIe cards with large FPGAs and many high-speed serial fiber transceivers. By separating data transport from data manipulation, the latter can be done by software in commodity servers attached to the network. Replacing traditional point-to-point links between Front-end components and the DAQ system by a switched network, FELIX provides scaling, flexibility, uniformity and upgradability. Different Front-end data types or different data sources can be routed to different network endpoints that handle that data type or source: e.g. event data, configuration, calibration, detector control, monito...

  13. The CMS High Level Trigger System: Experience and Future Development

    CERN Document Server

    Bauer, Gerry; Bowen, Matthew; Branson, James G; Bukowiec, Sebastian; Cittolin, Sergio; Coarasa, J A; Deldicque, Christian; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Flossdorf, Alexander; Gigi, Dominique; Glege, Frank; Gomez-Reino, R; Hartl, Christian; Hegeman, Jeroen; Holzner, André; Y L Hwong; Masetti, Lorenzo; Meijers, Frans; Meschi, Emilio; Mommsen, R K; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph; Petrucci, Andrea; Pieri, Marco; Polese, Giovanni; Racz, Attila; Raginel, Olivier; Sakulin, Hannes; Sani, Matteo; Schwick, Christoph; Shpakov, Dennis; Simon, M; Spataru, A C; Sumorok, Konstanty

    2012-01-01

The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of order a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the 2010/2011 collider run is reported. The current architecture of the CMS HLT, and its integration with the CMS reconstruction framework and the CMS DAQ, are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.
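
The farm sizing above implies a simple per-event CPU budget, which is the usual first-order constraint on HLT algorithm design:

```python
# Rough per-event CPU budget for the HLT farm described above:
# at 100 kHz in and O(10000) cores, each core must reach a decision
# in about cores/rate seconds on average.
l1_rate_hz = 100_000     # Level-1 accept rate
cores = 10_000           # O(10000) commodity CPU cores

budget_ms = cores / l1_rate_hz * 1000.0
print(budget_ms)         # → 100.0 (ms per event, on average)
```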

  14. The LHCb front-end electronics and data acquisition system

    CERN Document Server

    Jost, B

    2000-01-01

The LHCb experiment is the most recently approved of the four experiments under construction at CERN's LHC accelerator. It is a special-purpose experiment designed to precisely measure the CP violation parameters in the B-B̄ system and to study rare B decays. Triggering poses special problems since the interesting events containing B mesons are immersed in a large background of inelastic p-p reactions. We therefore decided to implement a four-level triggering scheme. The LHCb data acquisition (DAQ) system will have to cope with an average trigger rate of 40 kHz, after two levels of hardware triggers, and an average event size of 100 kB. Thus, an event-building network which can sustain an average bandwidth of 4 GB/s is required. A powerful software trigger farm will have to be installed to reduce the rate from 40 kHz to the 100 Hz of events written to permanent storage. In this paper we outline the general architectures of the front-end electronics and of the trigger and DAQ system and the readout protocols...
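
The quoted 4 GB/s event-building bandwidth follows directly from the trigger rate and average event size:

```python
# Event-building bandwidth implied by the abstract's figures.
trigger_rate_hz = 40_000      # average rate after two hardware trigger levels
event_size_kb = 100           # average event size

bandwidth_gb_s = trigger_rate_hz * event_size_kb / 1e6   # kB/s -> GB/s
print(bandwidth_gb_s)          # → 4.0, matching the quoted requirement
```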

  15. System Interface for an Integrated Intelligent Safety System (ISS) for Vehicle Applications

    Directory of Open Access Journals (Sweden)

    Mahammad A. Hannan

    2010-01-01

Full Text Available This paper deals with the interface-relevant activity of a vehicle integrated intelligent safety system (ISS) that includes an airbag deployment decision system (ADDS) and a tire pressure monitoring system (TPMS). A program is developed in LabWindows/CVI, using C, for prototype implementation. The prototype is primarily concerned with the interconnection between hardware objects such as a load cell, web camera, accelerometer, TPM tire module and receiver module, DAQ card, CPU card and a touch screen. Several safety subsystems, including image processing, weight sensing and crash detection systems, are integrated, and their outputs are combined to yield intelligent decisions regarding airbag deployment. The integrated safety system also monitors tire pressure and temperature. Testing and experimentation with this ISS suggests that the system is unique, robust, intelligent, and appropriate for in-vehicle applications.

  16. Hardware/Software Data Acquisition System for Real Time Cell Temperature Monitoring in Air-Cooled Polymer Electrolyte Fuel Cells.

    Science.gov (United States)

    Segura, Francisca; Bartolucci, Veronica; Andújar, José Manuel

    2017-07-09

This work presents a hardware/software data acquisition system developed for real-time monitoring of the cell temperatures in Air-Cooled Polymer Electrolyte Fuel Cells (AC-PEFC). These fuel cells are of great interest because they can carry out, in a single operation, the processes of oxidation and refrigeration. This allows reduction of the weight, volume, cost and complexity of the control system in the AC-PEFC. In this type of PEFC (and in general in any PEFC), reliable monitoring of the temperature along the entire surface of the stack is fundamental, since a suitable temperature and a regular distribution thereof are key to better stack performance and a longer lifetime under the best operating conditions. The developed data acquisition (DAQ) system can perform non-intrusive temperature measurements of each individual cell of an AC-PEFC stack of any power (from watts to kilowatts). The stack power is related to the temperature gradient: a higher power corresponds to a larger stack surface, and consequently a higher temperature difference between the coldest and the hottest point. The developed DAQ system has been implemented with the low-cost open-source platform Arduino, and it is completed with a modular virtual instrument developed using NI LabVIEW. The temperature-versus-time evolution of all the cells of an AC-PEFC, both together and individually, can be registered and supervised. The paper explains the developed DAQ system comprehensively, together with experimental results that demonstrate the suitability of the system.
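
As a flavour of the per-cell measurement chain, the sketch below converts a thermistor-divider ADC reading to a temperature with the Beta equation. All component values (divider resistor, thermistor R0, Beta) are assumptions for illustration; the paper's actual sensors and parameters are not specified here:

```python
import math

def adc_to_celsius(adc, vref=5.0, r_fixed=10_000.0, r0=10_000.0,
                   t0_k=298.15, beta=3950.0, adc_max=1023):
    """Convert a 10-bit ADC reading from an NTC thermistor divider
    (thermistor on the low side) to degrees Celsius via the Beta
    equation: 1/T = 1/T0 + ln(R/R0)/Beta."""
    v = adc * vref / adc_max                    # divider output voltage
    r_therm = r_fixed * v / (vref - v)          # thermistor resistance
    inv_t = 1.0 / t0_k + math.log(r_therm / r0) / beta
    return 1.0 / inv_t - 273.15

# mid-scale reading -> divider balanced -> R_therm = R0 -> T = 25 C
print(round(adc_to_celsius(511.5), 1))   # → 25.0
```

On the Arduino side the same arithmetic would run per channel per sample; the LabVIEW virtual instrument then only has to plot and log calibrated temperatures.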

  17. Hardware/Software Data Acquisition System for Real Time Cell Temperature Monitoring in Air-Cooled Polymer Electrolyte Fuel Cells

    Directory of Open Access Journals (Sweden)

    Francisca Segura

    2017-07-01

Full Text Available This work presents a hardware/software data acquisition system developed for real-time monitoring of the cell temperatures in Air-Cooled Polymer Electrolyte Fuel Cells (AC-PEFC). These fuel cells are of great interest because they can carry out, in a single operation, the processes of oxidation and refrigeration. This allows reduction of the weight, volume, cost and complexity of the control system in the AC-PEFC. In this type of PEFC (and in general in any PEFC), reliable monitoring of the temperature along the entire surface of the stack is fundamental, since a suitable temperature and a regular distribution thereof are key to better stack performance and a longer lifetime under the best operating conditions. The developed data acquisition (DAQ) system can perform non-intrusive temperature measurements of each individual cell of an AC-PEFC stack of any power (from watts to kilowatts). The stack power is related to the temperature gradient: a higher power corresponds to a larger stack surface, and consequently a higher temperature difference between the coldest and the hottest point. The developed DAQ system has been implemented with the low-cost open-source platform Arduino, and it is completed with a modular virtual instrument developed using NI LabVIEW. The temperature-versus-time evolution of all the cells of an AC-PEFC, both together and individually, can be registered and supervised. The paper explains the developed DAQ system comprehensively, together with experimental results that demonstrate the suitability of the system.

  18. 107th meeting of the working group electronic instrumentation in Spring 2016

    International Nuclear Information System (INIS)

    Goettlicher, Peter

    2016-06-01

The following topics were dealt with: instrumentation and the NUSTAR physics program in FAIR phase-0, current projects of the EE digital electronics group, Ethernet-based data acquisition beyond 10 Gbit/s, the big-data problem in DAQ systems, a general front-end readout architecture in scientific detector systems, the KALYPSO detection system for single-shot electro-optical bunch measurements, MicroTCA-based RF and laser cavity regulation including piezo controls, an FPGA implementation for a data acquisition system with gigabit serial link and PCIe interface, the next-generation MTCA.4 crate, increased PCI Express bandwidth up to 128 Gb/s and optical PCI Express cascading, power supplies for sensitive measurement techniques and complex automation, fail-safe industrial PCs, the interfacing of NI products to EPICS, the Green Cube, a survey of EPICS usage at GSI and FAIR, CS++ as an actor-based successor of the CS framework, synchronized fast shutter control with adaptive phase-shift compensation in the EtherCAT motion control system, a precise voltage supply for the SiPMs of a Cherenkov telescope in the Antarctic, development of a multichannel readout hardware for delay-line neutron detectors, silicon-photonic data transmission for detector instrumentation, EMC considerations for the Maria instrument at the FRM II, quench detectors for FAIR, and optimized illumination and in-situ calibration of high-gain antennas for the detection of extensive cosmic air showers. (HSI)

  19. 107th meeting of the working group electronic instrumentation in Spring 2016; 107. Tagung der Studiengruppe elektronische Instrumentierung im Fruehjahr 2016

    Energy Technology Data Exchange (ETDEWEB)

    Goettlicher, Peter (ed.)

    2016-06-15

    The following topics were dealt with: instrumentation and the NUSTAR physics program in FAIR phase-0, current projects of the EE digital-electronics group, Ethernet-based data acquisition beyond 10 Gbit/s, the big-data problem in DAQ systems, a general front-end readout architecture for scientific detector systems, the KALYPSO detection system for single-shot electro-optical bunch measurements, MicroTCA-based RF and laser cavity regulation including piezo controls, an FPGA implementation of a data acquisition system with gigabit serial link and PCIe interface, the next-generation MTCA.4 crate, increased PCI Express bandwidth up to 128 Gb/s and optical PCI Express cascading, power supplies for sensitive measurement techniques and complex automation, fail-safe industrial PCs, the interfacing of NI products to EPICS, the Green Cube, a survey of EPICS usage at GSI and FAIR, CS++ as actor-based successor of the CS framework, synchronized fast shutter control with adaptive phase-shift compensation in the EtherCAT motion-control system, a precise voltage supply for the SiPMs of a Cherenkov telescope in the Antarctic, development of multichannel readout hardware for delay-line neutron detectors, silicon-photonic data transmission for detector instrumentation, EMC considerations for the Maria instrument at the FRM2, quench detectors for FAIR, and optimized illumination and in-situ calibration of high-gain antennas for the detection of extensive cosmic air showers. (HSI)

  20. Contributions to dataflow sub-system of the ATLAS data acquisition and event filter prototype-1 project

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.; Niculescu, M.; Radu, A.

    1998-01-01

    A project has been approved by the ATLAS Collaboration for the design and implementation of a Data Acquisition (DAQ) and Event Filter (EF) prototype. The prototype consists of a full 'vertical' slice of the ATLAS Data Acquisition and Event Filter architecture and can be seen as made of 4 sub-systems: the Detector Interface, the Dataflow, the Back-end DAQ and the Event Filter. The Bucharest group is a member of the DAQ/EF collaboration and during 1997 it was involved in the Dataflow activities. The Dataflow component of the ATLAS DAQ/EF prototype is responsible for moving the event data from the detector read-out links to the final mass storage. It also provides event data for monitoring purposes and implements local control for the various elements. The Dataflow system is designed to cover three main functions, namely: the collection and buffering of the data from the detector, the merging of fragments into full events, and the interaction with the event filter sub-farms. The event-building function is covered by a Dataflow building block named the Event Builder. All the other functions of the Dataflow system are covered by two modular building blocks, the read-out crate (ROC) and the sub-farm DAQ (SFC). The Bucharest group was mainly involved in the activities related to the high-level design, initial implementation and tests of the ROC, which supports the read-out from one or more read-out drivers and has one or more connections to the event builder. The main data flow within the ROC is handled by three input/output modules (IOMs): the trigger module (TRG), the event builder interface module (EBIF) and the read-out buffer module (ROB). The TRG receives and buffers data control messages from the level-1 and level-2 trigger systems, the EBIF builds fragments and makes them available to the event-building sub-system, and the ROB receives and buffers ROB fragments from the read-out link, S-LINK. In order to estimate the performance which could be achieved with the actual

  1. A Data Acquisition System for Medical Imaging

    International Nuclear Information System (INIS)

    Abellan, Carlos; Cachemiche, Jean-Pierre; Rethore, Frederic; Morel, Christian

    2013-06-01

    A data acquisition system for medical imaging applications is presented. Developed at CPPM, it provides high-performance generic data acquisition and processing capabilities. The DAQ system is based on the PICMG xTCA standard and is composed of 1 up to 10 cards in a single rack, each one with 2 Altera Stratix IV FPGAs and a Fast Mezzanine Connector (FMC). Several mezzanines have been produced, each one with different functionalities. Some examples are: a mezzanine capable of receiving 36 optical fibres with up to 180 Gbps sustained data rates, or a mezzanine with 12 x 5 Gbps input links, 12 x 5 Gbps output links and an SFP+ connector for control purposes. Several rack sizes are also available, thus making the system scalable from a one-card desktop system useful for development purposes up to a full-featured rack-mounted DAQ for high-end applications. Depending on the application, boards may exchange data at speeds of up to 25.6 Gbps bidirectional sustained rates in a double-star topology through back-plane connections. Front-panel optical fibres can also be used when higher rates are required by the application. The system may be controlled by a standard Ethernet connection, thus providing easy integration with control computers and avoiding the need for drivers. Two control systems are foreseen. A socket connection provides easy interaction with automation software regardless of the operating system used for the control PC. Moreover, a web server may run on the Envision cards and provide an easy, intuitive user interface. The system and its different components will be introduced. Some preliminary measurements with high-speed signal links will be presented, as well as the signal conditioning used to allow these rates. (authors)

  2. Jefferson Lab's Distributed Data Acquisition

    International Nuclear Information System (INIS)

    Trent Allison; Thomas Powers

    2006-01-01

    Jefferson Lab's Continuous Electron Beam Accelerator Facility (CEBAF) occasionally experiences fast intermittent beam instabilities that are difficult to isolate and result in downtime. The Distributed Data Acquisition (Dist DAQ) system is being developed to detect and quickly locate such instabilities. It will consist of multiple Ethernet-based data acquisition chassis distributed throughout the seven-eighths-of-a-mile CEBAF site. Each chassis will monitor various control-system signals that are only available locally and/or monitored by systems with small bandwidths that cannot identify fast transients. The chassis will collect data at rates up to 40 Msps in circular buffers that can be frozen and unrolled after an event trigger. These triggers will be derived from signals such as periodic timers or accelerator faults and be distributed via a custom fiber-optic event trigger network. This triggering scheme will allow all the data acquisition chassis to be triggered simultaneously and provide a snapshot of relevant CEBAF control signals. The data will then be automatically analyzed for frequency content and transients to determine if and where instabilities exist.
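    The freeze-and-unroll behaviour of such circular buffers can be sketched as follows (a hypothetical illustration, not Jefferson Lab code): samples stream into a fixed-depth buffer, and a trigger freezes it and unrolls the pre-trigger history oldest-first.

```python
from collections import deque

# Hypothetical sketch of trigger-frozen circular capture: the buffer always
# holds the most recent `depth` samples; a trigger snapshots that history.

class CircularCapture:
    def __init__(self, depth):
        self.buf = deque(maxlen=depth)  # old samples fall off automatically
        self.frozen = None

    def sample(self, value):
        if self.frozen is None:         # stop recording once triggered
            self.buf.append(value)

    def trigger(self):
        self.frozen = list(self.buf)    # unroll: oldest sample first
        return self.frozen

cap = CircularCapture(depth=4)
for v in range(10):                     # stream in samples 0..9
    cap.sample(v)
snapshot = cap.trigger()                # the last 4 samples before the trigger
```

    In the real system the trigger is distributed over a fiber network so that every chassis snapshots the same moment in time.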

  3. The ALICE Silicon Pixel Detector Control and Calibration Systems

    CERN Document Server

    Calì, Ivan Amos; Manzari, Vito; Stefanini, Giorgio

    2008-01-01

    The work presented in this thesis was carried out in the Silicon Pixel Detector (SPD) group of the ALICE experiment at the Large Hadron Collider (LHC). The SPD is the innermost part (two cylindrical layers of silicon pixel detectors) of the ALICE Inner Tracking System (ITS). During the last three years I have been strongly involved in the SPD hardware and software development, construction and commissioning. This thesis is focused on the design, development and commissioning of the SPD Control and Calibration Systems. I started this project from scratch. After a prototyping phase, a stable version of the control and calibration systems is now operative. These systems allowed the test, integration and commissioning of the detector sectors and half-barrels, as well as the SPD commissioning in the experiment. The integration of the systems with the ALICE Experiment Control System (ECS), DAQ and Trigger system has been accomplished, and the SPD participated in the experimental December 2007 commissioning run. The complex...

  4. Soft real-time alarm messages for ATLAS TDAQ

    CERN Document Server

    Darlea, G; Martin, B; Lehmann Miotto, G

    2010-01-01

    The ATLAS TDAQ network consists of three separate Ethernet-based networks (Data, Control and Management) with over 2000 end-nodes. The TDAQ system has to be aware of meaningful network failures and events in order to take effective recovery actions. The first stage of the process is implemented with Spectrum, a commercial network management tool. Spectrum detects and registers all network events, then publishes the information via a CORBA programming interface. A gateway program (called NSG—Network Service Gateway) connects to Spectrum through CORBA and exposes to its clients a Java RMI interface. This interface implements a callback mechanism that allows the clients to subscribe for monitoring "interesting" parts of the network. The last stage of the TDAQ network monitoring tool is implemented in a module named DNC (DAQ to Network Connection), which filters the events that are to be reported to the TDAQ system: it subscribes to the gateway only for the machines that are currently active in th...

  5. Fourth Data Challenge for the ALICE data acquisition system

    CERN Multimedia

    Maximilien Brice

    2003-01-01

    The ALICE experiment will study quark-gluon plasma using beams of heavy ions, such as those of lead. The particles in the beams will collide thousands of times per second in the detector and each collision will generate an event containing thousands of charged particles. Every second, the characteristics of tens of thousands of particles will have to be recorded. Thus, to be effective, the data acquisition system (DAQ) must meet extremely strict performance criteria. To this end, the ALICE Data Challenges entail step-by-step testing of the DAQ with existing equipment that is sufficiently close to the final equipment to provide a reliable indication of performance. During the fourth challenge, in 2002, a data acquisition rate of 1800 megabytes per second was achieved by using some thirty parallel-linked PCs running the specially developed DATE software. During the final week of tests in December 2002, the team also tested the Storage Tek linear magnetic tape drives. Their bandwidth is 30 megabytes per second a...

  6. Multichannel FPGA-Based Data-Acquisition-System for Time-Resolved Synchrotron Radiation Experiments

    Science.gov (United States)

    Choe, Hyeokmin; Gorfman, Semen; Heidbrink, Stefan; Pietsch, Ullrich; Vogt, Marco; Winter, Jens; Ziolkowski, Michael

    2017-06-01

    The aim of this contribution is to describe our recent development of a novel compact field-programmable gate array (FPGA)-based data acquisition (DAQ) system for use with multichannel X-ray detectors at synchrotron radiation facilities. The system is designed for time-resolved counting of single photons arriving simultaneously from several (currently 12) independent detector channels. Detector signals of at least 2.8 ns duration are latched by asynchronous logic and then synchronized with the 100 MHz system clock. The incoming signals are subsequently sorted into 10,000 time-bins, where they are counted according to the arrival time of the photons with respect to the trigger signal. A repeatable mode of triggered operation is used to achieve high statistics of accumulated counts. The time-bin width is adjustable from 10 ns to 1 ms. In addition, a special mode of operation with 2 ns time resolution is provided for two detector channels. The system is implemented in pocket-sized FPGA-based hardware of 10 cm × 10 cm × 3 cm and thus can easily be transported between synchrotron radiation facilities. For setup of operation and data read-out, the hardware is connected via a USB interface to a portable control computer. DAQ applications are provided in both LabVIEW and MATLAB environments.
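    The time-binning scheme described above can be sketched in software (a hypothetical illustration; the real system does this in FPGA logic): photon arrival times relative to the trigger are sorted into fixed-width bins, and counts accumulate over repeated trigger cycles.

```python
# Hypothetical sketch of time-binned single-photon counting: arrival times
# (ns, relative to the trigger) fall into fixed-width bins; passing the same
# counts list across trigger cycles accumulates statistics.

def bin_photons(arrival_times_ns, bin_width_ns, n_bins, counts=None):
    counts = counts or [0] * n_bins
    for t in arrival_times_ns:
        b = int(t // bin_width_ns)     # which time-bin this photon lands in
        if 0 <= b < n_bins:            # photons outside the window are dropped
            counts[b] += 1
    return counts

# Two trigger cycles with 10 ns bins over a 100 ns window (toy numbers)
counts = bin_photons([5, 12, 14, 95], bin_width_ns=10, n_bins=10)
counts = bin_photons([7, 13], bin_width_ns=10, n_bins=10, counts=counts)
```

    The hardware analogue uses 10,000 bins and repeats the triggered cycle until the accumulated counts reach the desired statistics.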

  7. Process instrumentation and control for cryogenic system of VECC

    International Nuclear Information System (INIS)

    Pal, Sandip

    2017-01-01

    The Superconducting Cyclotron, which comprises a superconducting main magnet and cryopanels operating at 4.3 K, has been operational at VECC in three phases starting from 2005, and finally ran without interruption from July 2010 to November 2016. The cryogenic loads of the Cyclotron are catered for by either of two helium liquefiers/refrigerators (250 W and 415 W at 4.5 K) and the associated cryogen distribution system with an extensive helium gas management system. The system also comprises 31,000 liters of liquid nitrogen (LN2) storage and delivery, necessary for the radiation shield. The EPICS (Experimental Physics and Industrial Control System) architecture is open source, flexible, and has unlimited tags compared to commercial supervisory control and data acquisition (SCADA) packages; hence it was adopted to design the SCADA module. The EPICS Input Output Controller (IOC) communicates with four PLCs over an Ethernet-based control LAN to control/monitor 618 field inputs/outputs (I/O). The control system is fully automated and does not require any human intervention for routine operation. Since the two liquefiers share the same high-pressure (HP) and low-pressure (LP) pipelines, pressure fluctuations due to rapid changes in flow sometimes cause the liquefiers to trip. A few modifications were made in the control scheme in the HP and LP zones to avoid liquefier trips. The plant runs very reliably round the clock, and the historical data of important parameters during plant operation are archived for plant maintenance, easy diagnosis and future modifications. The total pure helium cycle gas inventory is monitored through EPICS for early detection of helium loss from its trend

  8. Development of new data acquisition system for COMPASS experiment

    Science.gov (United States)

    Bodlak, M.; Frolov, V.; Jary, V.; Huber, S.; Konorov, I.; Levit, D.; Novy, J.; Salac, R.; Virius, M.

    2016-04-01

    This paper presents the development and recent status of the new data acquisition system of the COMPASS experiment at CERN, with up to a 50 kHz trigger rate and 36 kB average event size during a 10-second period with beam, followed by an approximately 40-second period without beam. In the original DAQ, event building is performed by software deployed on a switched computer network, and the data readout is based on the deprecated PCI technology; the new system replaces the event-building network with custom FPGA-based hardware. The custom cards are introduced and the advantages of the FPGA technology for DAQ-related tasks are discussed. In this paper, we focus on the software part, which is mainly responsible for control and monitoring. Most of the system can run as slow control; only the readout process has real-time requirements. The design of the software is built on state machines implemented using the Qt framework; communication between the remote nodes that form the software architecture is based on the DIM library and IPBus technology. Furthermore, the PHP and JS languages are used to maintain the system configuration; the MySQL database was selected as storage for both the configuration of the system and system messages. The system has been designed with a maximum throughput of 1500 MB/s and a large buffering ability used to spread the load on the readout computers over a longer period of time. Great emphasis is put on data latency, data consistency, and timing checks, which are done at each stage of event assembly. The system collects the results of these checks, which together with a special data format allows the software to localize the origin of problems in the data transmission process. A prototype version of the system has already been developed and tested; it fulfills all given requirements. It is expected that the full-scale version of the system will be finalized in June 2014 and deployed in September, provided that tests with a cosmic run succeed.

  9. On-chamber readout system for the ATLAS MDT Muon Spectrometer

    CERN Document Server

    Chapman, J; Ball, R; Brandenburg, G; Hazen, E; Oliver, J; Posch, C

    2004-01-01

    The ATLAS MDT Muon Spectrometer is a system of approximately 380,000 pressurized cylindrical drift tubes of 3 cm diameter and up to 6 meters in length. These Monitored Drift Tubes (MDTs) are precision-glued to form super-layers, which in turn are assembled into precision chambers of up to 432 tubes each. Each chamber is equipped with a set of mezzanine cards containing analog and digital readout circuitry sufficient to read out 24 MDTs per card. Up to 18 of these cards are connected to an on-chamber DAQ element referred to as a Chamber Service Module, or CSM. The CSM multiplexes data from the mezzanine cards and outputs this data on an optical fiber which is received by the off-chamber DAQ system. Thus, the chamber forms a highly self-contained unit with DC power in and a single optical fiber out. The Monitored Drift Tubes, due to their length, require a terminating resistor at their far end to prevent reflections. The readout system has been designed so that thermal noise from this resistor remains the domi...

  10. A VME-based readout system for the CMS Preshower sub-detector

    CERN Document Server

    Antchev, G; Bialas, W; Da Silva, J C; Kokkas, P; Manthos, N; Reynaud, S; Sidiropoulos, G; Snoeys, W; Vichoudis, P

    2007-01-01

    The CMS preshower is a fine-grain detector that comprises 4288 silicon sensors, each containing 32 strips. The raw data are transferred from the detector to the counting room via 1208 optical fibres. Each fibre carries a 600-byte data packet per event. The maximum average level-1 trigger rate of 100 kHz results in a total data flow of ~72 GB/s from the preshower. For the readout of the preshower, 56 links to the CMS DAQ have been reserved, each having a bandwidth of 200 MB/s (2 kB/event). The total available downstream bandwidth of ~11.2 GB/s necessitates a reduction in the data volume by a factor of at least 7. A modular VME-based system is currently under development. The main objective of each VME board in this system is to acquire on-detector data from at least 22 optical links, perform on-line data reduction and pass the concentrated data to the CMS DAQ. The principal modules that the system is based on are being developed in collaboration with the TOTEM experiment.
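    The quoted figures can be cross-checked with a few lines of arithmetic (the required reduction comes out just under 7, consistent with the record's "factor of at least 7"):

```python
# Cross-check of the preshower bandwidth numbers quoted above.

raw_bps = 1208 * 600 * 100e3   # fibres x bytes/event x trigger rate -> bytes/s
out_bps = 56 * 200e6           # DAQ links x link bandwidth -> bytes/s
reduction = raw_bps / out_bps  # data-volume reduction the boards must achieve
```

    `raw_bps` is 72.48 GB/s (the "~72 GB/s" above) and `out_bps` is 11.2 GB/s, so each VME board must shrink its share of the data by roughly a factor of 6.5-7.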

  11. Performance of the NOνA Data Acquisition and Trigger Systems for the full 14 kT Far Detector

    International Nuclear Information System (INIS)

    Norman, A.; Ding, P.F.; Rebel, B.; Shanahan, P.; Davies, G.S.; Niner, E.; Dukes, E.C.; Frank, M.J.; Group, R.C.; Henderson, W.; Mina, R.; Oksuzian, Y.; Duyan, H.; Habig, A.; Moren, A.; Tomsen, K.; Mualem, L.; Sheshukov, A.; Tamsett, M.; Vinton, L.

    2015-01-01

    The NOvA experiment uses a continuous, free-running, dead-timeless data acquisition system to collect data from the 14 kT far detector. The DAQ system reads out the more than 344,000 detector channels and assembles the information into a raw, unfiltered, high-bandwidth data stream. The NOvA trigger systems operate in parallel to the readout and asynchronously to the primary DAQ readout/event-building chain. The data-driven triggering systems for NOvA are unique in that they examine long contiguous time windows of the high-resolution readout data and enable the detector to be sensitive to a wide range of physics interactions, from those with fast, nanosecond-scale signals up to processes with long delayed coincidences between hits occurring on the tens-of-milliseconds time scale. The trigger system is able to achieve a true 100% live time for the detector, making it sensitive to both beam-spill-related and off-spill physics. (paper)

  12. Experiences with ATM in a multivendor pilot system at Forschungszentrum Julich

    Science.gov (United States)

    Kleines, H.; Ziemons, K.; Zwoll, K.

    1998-08-01

    The ATM technology for high-speed serial transmission provides a new quality of communication by introducing novel features in a LAN environment, especially support of real-time communication, of both LAN and WAN communication, and of multimedia streams. In order to evaluate ATM for future DAQ systems and remote control systems, as well as for a high-speed picture archiving and communication system for medical images, Forschungszentrum Julich has built up a pilot system for the evaluation of ATM and standard low-cost multimedia systems. It is a heterogeneous multivendor system containing a variety of switches and desktop solutions, employing different protocol options of ATM. The tests conducted in the pilot system revealed major difficulties regarding stability, interoperability and performance. The paper presents the motivation, layout and results of the pilot system. The discussion of results concentrates on performance issues relevant for realistic applications, e.g., connection to a RAID system via NFS over ATM.

  13. Automated Liquid-Level Control of a Nutrient Reservoir for a Hydroponic System

    Science.gov (United States)

    Smith, Boris; Asumadu, Johnson A.; Dogan, Numan S.

    1997-01-01

    A microprocessor-based system for control of the liquid level of a nutrient reservoir for a plant hydroponic growing system has been developed. The system uses an ultrasonic transducer to sense the liquid level or height. A National Instruments Multifunction Analog and Digital Input/Output PC Kit, including NI-DAQ DOS/Windows driver software, is used with an IBM 486 personal computer, and a LabVIEW Full Development System for Windows is the graphical programming system being used. The system allows liquid-level control to within 0.1 cm for all levels tried between 8 and 36 cm in the hydroponic application. The detailed algorithms have been developed, and a fully automated microprocessor-based nutrient replenishment system has been described for this hydroponic system.
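    One common way to implement such on/off level control is hysteresis around a setpoint band; the sketch below is a hypothetical illustration (the record does not publish its control algorithm): the pump turns on below a low threshold, off above a high one, and holds state inside the band to avoid chattering.

```python
# Hypothetical sketch of hysteresis (bang-bang) level control. The thresholds
# low_cm/high_cm are illustrative, not values from the paper.

def pump_command(level_cm, pump_on, low_cm=20.0, high_cm=22.0):
    if level_cm < low_cm:
        return True        # reservoir low: start the fill pump
    if level_cm > high_cm:
        return False       # reservoir full: stop the pump
    return pump_on         # inside the band: keep the previous state

state = False
trace = []
for level in [23.0, 21.0, 19.5, 20.5, 22.5]:   # simulated height readings (cm)
    state = pump_command(level, state)
    trace.append(state)
```

    The width of the band trades off switching frequency against level ripple; with the 0.1 cm sensing resolution reported above, a band of a few millimetres would already be resolvable.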

  14. A novel portable fluorescence detection system for microfluidic card

    International Nuclear Information System (INIS)

    Shen, B; Xie, Y; Irawan, R

    2008-01-01

    Fluorescence-based sensors are widely used in the fields of biochemistry and medicine due to their high sensitivity and accuracy, but the cost and time required for each sample to be tested are high. If the diagnostic tools could be miniaturized, made simple to use and much less expensive, and be readily available at the point of need, such as in emergency diagnosis, millions of people would benefit. In this paper, we design a prototype of a portable fluorescence detection system based on a fluorescence filter block and a DAQ card, which emulates the signal collection and processing functionalities. After the introduction of the system structure and functional modules, we use a resolution approximation method to investigate the system performance. The evaluation shows that our prototype system has a sensitivity of 0.01 mMol/L (333.306 μg/mL), which meets most medical requirements.

  15. Software development for a switch-based data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Booth, A. (Superconducting Super Collider Lab., Dallas, TX (United States)); Black, D.; Walsh, D. (Fermi National Accelerator Lab., Batavia, IL (United States))

    1991-12-01

    We report on the software aspects of the development of a switch-based data acquisition system at Fermilab. This paper describes how, with the goal of providing an integrated "systems engineering" environment, several powerful software tools were put in place to facilitate extensive exploration of all aspects of the design. These tools include a simulation package, a graphics package and an Expert System shell, which have been integrated to provide an environment that encourages the close interaction of hardware and software engineers. This paper includes a description of the simulation, user interface, embedded software, remote procedure calls, and diagnostic software which together have enabled us to provide real-time control and monitoring of a working prototype switch-based data acquisition (DAQ) system.

  16. The Message Logging System for NOνA Experiment

    International Nuclear Information System (INIS)

    Lu Qiming; Kowalkowski, J B; Biery, K A

    2011-01-01

    The message logging system provides the infrastructure for all of the distributed processes in the data acquisition (DAQ) system to report status messages of various severities in a consistent manner to a central location, as well as providing the tools for displaying and archiving the messages. The message logging system has been developed over a decade and has run successfully in the CDF and CMS experiments. The most recent work on the message logging system has been to build it as a stand-alone package, named MessageFacility, which works with any generic framework or application, with NOνA as the first driving user. The system design and architecture, as well as the effort of making it a generic library, are discussed. We also present new features that have been added.

  17. [The primary research and development of software oversampling mapping system for electrocardiogram].

    Science.gov (United States)

    Zhou, Yu; Ren, Jie

    2011-04-01

    We put forward a new concept, a software oversampling mapping system for electrocardiogram (ECG), to assist research on the ECG inverse problem and to improve the generality of the mapping system and the quality of the mapped signals. We developed a conceptual system based on a traditional ECG detection circuit, LabVIEW and a DAQ card produced by National Instruments, and at the same time combined the newly developed oversampling method into the system. The results indicated that the system could map ECG signals accurately and that the quality of the signals was good. The improvement of the hardware and enhancement of the software make the system suitable for mapping in different situations. The primary development of the software oversampling mapping system was thus successful, and further research and development can make the system a powerful tool for studying the ECG inverse problem.
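    The oversampling idea can be sketched as follows (a hypothetical illustration, not the paper's implementation): acquiring N raw samples per output point and averaging them reduces uncorrelated noise by roughly √N.

```python
# Hypothetical sketch of software oversampling: average each consecutive
# group of `factor` raw samples into one output sample.

def oversample(samples, factor):
    """Decimate-by-averaging: len(samples)//factor output points."""
    n = len(samples) // factor
    return [sum(samples[i * factor:(i + 1) * factor]) / factor
            for i in range(n)]

# 8 noisy raw samples, oversampled 4x into 2 clean output points
raw = [1.0, 3.0, 2.0, 2.0, 5.0, 7.0, 6.0, 6.0]
out = oversample(raw, factor=4)
```

    With factor N = 4, uncorrelated noise amplitude drops by about a factor of 2, at the cost of a 4x lower output sample rate.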

  18. PADS (Patient Archiving and Documentation System): a computerized patient record with educational aspects.

    Science.gov (United States)

    Hohnloser, J H; Pürner, F

    1992-01-01

    Rapid acquisition and analysis of information in an Intensive Care Unit (ICU) setting is essential, even more so the documentation of the decision-making process, which has vital consequences for the lives of ICU patients. We describe an Ethernet-based local area network (LAN) with clinical workstations (Macintosh fx, ci). Our Patient Archiving and Documentation System (PADS) represents a computerized patient record presently used in a university hospital's ICU. Taking full advantage of the Macintosh graphical user interface (GUI), our system enables nurses and doctors to perform the following tasks: admission, medical history taking, physical examination, generation of problem lists and follow-up notes, access to laboratory data and reports, and semiautomatic generation of a discharge summary with full word-processor capabilities. Furthermore, the system offers rapid, consistent and complete automatic encoding of diagnoses following the International Classification of Disease (ICD; WHO, [1]). For educational purposes the user can also view disease entities or complications related to the diagnoses she/he encoded. The system has links to other educational programs, such as cardiac auscultation. A MEDLINE literature search through a CD-ROM based system can be performed without exiting the system; CD-ROM based medical textbooks can be accessed as well. Commercially available Macintosh programs can be integrated into the system without exiting the main program, enabling users to customize their working environment. Additional options include automatic background monitoring of users' learning behavior, and analyses and graphical display of numerous epidemiological and health-care-related problems. Furthermore, we are in the process of integrating sound and digital video into our system. This system represents one in a line of modular departmental models which will eventually be integrated to form a decentralized Hospital Information System (HIS).

  19. Recent experience and future evolution of the CMS High Level Trigger System

    CERN Document Server

    Bauer, Gerry; Branson, James; Bukowiec, Sebastian Czeslaw; Chaze, Olivier; Cittolin, Sergio; Coarasa Perez, Jose Antonio; Deldicque, Christian; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Gigi, Dominique; Glege, Frank; Gomez-Reino Garrido, Robert; Hartl, Christian; Holzner, Andre Georg; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Nunez Barranco Fernandez, Carlos; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrucci, Andrea; Pieri, Marco; Polese, Giovanni; Racz, Attila; Raginel, Olivier; Sakulin, Hannes; Sani, Matteo; Schwick, Christoph; Spataru, Andrei Cristian; Stoeckli, Fabian; Sumorok, Konstanty

    2012-01-01

    The CMS experiment at the LHC uses a two-stage trigger system, with events flowing from the first-level trigger at a rate of 100 kHz. These events are read out by the Data Acquisition system (DAQ), assembled in memory in a farm of computers, and finally fed into the high-level trigger (HLT) software running on the farm. The HLT software selects interesting events for offline storage and analysis at a rate of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the 2010-2011 collider run is detailed, as well as the current architecture of the CMS HLT and its integration with the CMS reconstruction framework and CMS DAQ. The short- and medium-term evolution of the HLT software infrastructure is discussed, with future improvements aimed at supporting extensions of the HLT computing power and addressing remaining performance and maintenance issues.

  20. Preliminary assessment of a new data acquisition system for the microPET at IFUNAM

    Science.gov (United States)

    Murrieta-Rodríguez, Tirso; Alva-Sánchez, Héctor; Nava, Dante; Martínez-Dávalos, Arnulfo; Rodríguez-Villafuerte, Mercedes

    2010-12-01

    In this work the new data acquisition system (DAQ) for the microPET of the SIBI project is presented. To increase the microPET sensitivity, the inclusion of more detection modules is required, which in turn needs a more sophisticated and compact signal-processing system. The new DAQ is based on field-programmable gate arrays (FPGAs) and is composed of (i) an 8-input triggering board with individual channel-adjusting capabilities, which can process signals from 8 detector modules working in coincidence mode, and (ii) two 10-channel digitising boards with 12-bit resolution. The digitised signals are transmitted to a PC through two Ethernet ports on each board. With the new boards the maximum singles counting rate is of the order of 350 kHz, with a dead time of 2.8 μs. Individual crystal maps of two detectors for image corrections have been obtained, with peak-to-valley ratios of 5:1. The new FPGA boards will allow the introduction of more detection modules with a relatively simple electronics arrangement.
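    The quoted dead time can be related to counting losses with the standard non-paralyzable dead-time model (a textbook formula, not taken from the record): for measured rate m and dead time τ, the estimated true rate is n = m / (1 − mτ).

```python
# Non-paralyzable dead-time correction: estimate the true event rate from a
# measured rate and a per-event dead time (here the quoted 2.8 us).

def true_rate(measured_hz, dead_time_s):
    live_fraction = 1.0 - measured_hz * dead_time_s
    if live_fraction <= 0:
        raise ValueError("measured rate saturates the dead time")
    return measured_hz / live_fraction

# At 100 kHz measured singles with 2.8 us dead time, the system is dead
# 28% of the time, so the true rate is noticeably higher than measured.
n = true_rate(100e3, 2.8e-6)
```

    At the quoted maximum of ~350 kHz, mτ approaches 1, which is exactly why such a correction (or a rate limit) matters for quantitative imaging.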

  1. Design and Implementation of the Control System of an Internal Combustion Engine Test Unit

    Directory of Open Access Journals (Sweden)

    Tufan Koç

    2014-02-01

    Full Text Available Accurate engine tests and performance analyses require advanced test equipment to minimize measurement errors. In other words, reliable test results depend on measuring many parameters and recording the experimental data accurately, which in turn depends on the engine test unit. This study aims to design the control system of an internal combustion engine test unit. In the study, the performance parameters of an available internal combustion engine have been transferred to a computer in real time. A data acquisition (DAQ) card has been used to transfer the experimental data to the computer, and a user interface for performing the necessary procedures has been developed using LabVIEW. The dynamometer load, the fuel consumption, and the desired speed can easily be adjusted precisely by using the DAQ card and the user interface during the engine test. Load, fuel consumption, and temperature values (engine inlet-outlet, exhaust inlet-outlet, oil, and ambient) can be viewed on the interface and recorded to the computer. The developed system is expected to contribute both to the education of students and to researchers' studies, filling an important gap.
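
    The measured quantities mentioned in the record (dynamometer load, speed, fuel consumption) are typically combined into brake power and brake-specific fuel consumption. A sketch of these textbook formulas (function names are illustrative, not from the study):

```python
import math

def brake_power_kw(torque_nm, speed_rpm):
    """Brake power from dynamometer torque and engine speed:
    P [W] = 2*pi*N*T/60, converted here to kW."""
    return 2.0 * math.pi * speed_rpm * torque_nm / 60.0 / 1000.0

def bsfc_g_per_kwh(fuel_g_per_h, power_kw):
    """Brake-specific fuel consumption: fuel mass flow per unit power."""
    return fuel_g_per_h / power_kw

p = brake_power_kw(torque_nm=100.0, speed_rpm=3000.0)  # ~31.4 kW
bsfc = bsfc_g_per_kwh(fuel_g_per_h=8000.0, power_kw=p)
```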

  2. The control system of the multi-strip ionization chamber for the HIMM

    Energy Technology Data Exchange (ETDEWEB)

    Li, Min, E-mail: limin@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Yuan, Y.J. [Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000 (China); Mao, R.S., E-mail: Maorsh@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000 (China); Xu, Z.G.; Li, Peng; Zhao, T.C.; Zhao, Z.L. [Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000 (China); Zhang, Nong [Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000 (China); University of Chinese Academy of Sciences, Beijing 100049 (China)

    2015-03-11

    The Heavy Ion Medical Machine (HIMM) is a carbon ion cancer treatment facility being built by the Institute of Modern Physics (IMP) in China. In this facility, the transverse profile and intensity of the beam at the treatment terminals will be measured by the multi-strip ionization chamber. In order to fulfill the requirement of beam position feedback for automatic beam commissioning, a reaction time of less than 1 ms must be achieved by the Data Acquisition (DAQ) of this detector. Therefore, the control system and software framework for the DAQ have been redesigned and developed with National Instruments Compact Reconfigurable Input/Output (CompactRIO) instead of PXI 6133. The software is LabVIEW-based and developed following the producer–consumer pattern with a message mechanism and queue technology. The newly designed control system has been tested with carbon beam at the Heavy Ion Research Facility at Lanzhou-Cooler Storage Ring (HIRFL-CSR) and has provided single beam profile measurements in less than 1 ms with 1 mm beam position resolution. The fast reaction time and high-precision data processing during the beam test have verified the usability and maintainability of the software framework. Furthermore, such a software architecture is easily adapted to applications with different detectors such as wire scanner detectors.
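
    The producer–consumer pattern with queues described in the record can be sketched in a few lines; this Python version (the record's implementation is in LabVIEW) shows the core idea of decoupling acquisition from processing through a bounded queue with a sentinel message to end the run:

```python
import queue
import threading

def producer(q, n_events):
    """Acquisition loop: push raw readings into the bounded queue."""
    for i in range(n_events):
        q.put({"seq": i, "adc": i * 10})
    q.put(None)  # sentinel: end of run

def consumer(q, results):
    """Processing loop: pop readings and process until the sentinel."""
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item["adc"])

q = queue.Queue(maxsize=64)  # bounded queue applies back-pressure
results = []
t_prod = threading.Thread(target=producer, args=(q, 5))
t_cons = threading.Thread(target=consumer, args=(q, results))
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
# results now holds the processed samples in acquisition order
```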

  3. Performance Comparison of 112 Gb/s DMT, Nyquist PAM4 and Partial-Response PAM4 for Future 5G Ethernet-based Fronthaul Architecture

    DEFF Research Database (Denmark)

    Eiselt, Nicklas; Muench, Daniel; Dochhan, Annika

    2018-01-01

    (EML), a 25G driver and current state-of-the-art high speed 84 GS/s CMOS digital-to-analog converter (DAC) and analog-to-digital converter (ADC) test chips. Each modulation format is optimized independently for the desired scenario and their digital signal processing (DSP) requirements are investigated...

  4. GPUs for real-time processing in HEP trigger systems

    CERN Document Server

    Ammendola, R; Deri, L; Fiorini, M; Frezza, O; Lamanna, G; Lo Cicero, F; Lonardo, A; Messina, A; Sozzi, M; Pantaleo, F; Paolucci, Ps; Rossetti, D; Simula, F; Tosoratto, L; Vicini, P

    2014-01-01

    We describe a pilot project (GAP - GPU Application Project) for the use of GPUs (Graphics processing units) for online triggering applications in High Energy Physics experiments. Two major trends can be identified in the development of trigger and DAQ systems for particle physics experiments: the massive use of general-purpose commodity systems such as commercial multicore PC farms for data acquisition, and the reduction of trigger levels implemented in hardware, towards a fully software data selection system ("trigger-less"). The innovative approach presented here aims at exploiting the parallel computing power of commercial GPUs to perform fast computations in software, not only at high trigger levels but also in early trigger stages. General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the incre...

  5. New operator assistance features in the CMS Run Control System

    CERN Document Server

    Andre, Jean-Marc Olivier; Branson, James; Brummer, Philipp Maximilian; Chaze, Olivier; Cittolin, Sergio; Contescu, Cristian; Craigs, Benjamin Gordon; Darlea, Georgiana Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, Marc; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan F; Gigi, Dominique; Michail Gładki; Glege, Frank; Gomez Ceballos, Guillelmo; Hegeman, Jeroen Guido; Holzner, Andre Georg; Janulis, Mindaugas; Jimenez Estupinan, Raul; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrova, Petia; Pieri, Marco; Racz, Attila; Reis, Thomas; Sakulin, Hannes; Schwick, Christoph; Simelevicius, Dainius; Zejdl, Petr; Vougioukas, M.

    2017-01-01

    The Run Control System of the Compact Muon Solenoid (CMS) experiment at CERN is a distributed Java web application running on Apache Tomcat servers. During Run-1 of the LHC, many operational procedures have been automated. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following t...

  6. Embedded systems design for high-speed data acquisition and control

    CERN Document Server

    Di Paolo Emilio, Maurizio

    2015-01-01

    This book serves as a practical guide for practicing engineers who need to design embedded systems for high-speed data acquisition and control systems. A minimum amount of theory is presented, along with a review of analog and digital electronics, followed by detailed explanations of essential topics in hardware design and software development. The discussion of hardware focuses on microcontroller design (ARM microcontrollers and FPGAs), techniques of embedded design, high speed data acquisition (DAQ) and control systems. Coverage of software development includes main programming techniques, culminating in the study of real-time operating systems. All concepts are introduced in a manner to be highly-accessible to practicing engineers and lead to the practical implementation of an embedded board that can be used in various industrial fields as a control system and high speed data acquisition system.   • Describes fundamentals of embedded systems design in an accessible manner; • Takes a problem-solving ...

  7. Upgrades of DARWIN, a dose and spectrum monitoring system applicable to various types of radiation over wide energy ranges

    Science.gov (United States)

    Sato, Tatsuhiko; Satoh, Daiki; Endo, Akira; Shigyo, Nobuhiro; Watanabe, Fusao; Sakurai, Hiroki; Arai, Yoichi

    2011-05-01

    A dose and spectrum monitoring system applicable to neutrons, photons and muons over wide ranges of energy, designated as DARWIN, has been developed for radiological protection in high-energy accelerator facilities. DARWIN consists of a phoswitch-type scintillation detector, a data-acquisition (DAQ) module for digital waveform analysis, and a personal computer equipped with a graphical-user-interface (GUI) program for controlling the system. The system was recently upgraded by introducing an original DAQ module based on a field programmable gate array, FPGA, and also by adding a function for estimating neutron and photon spectra based on an unfolding technique without requiring any specific scientific background of the user. The performance of the upgraded DARWIN was examined in various radiation fields, including an operational field in J-PARC. The experiments revealed that the dose rates and spectra measured by the upgraded DARWIN are quite reasonable, even in radiation fields with peak structures in terms of both spectrum and time variation. These results clearly demonstrate the usefulness of DARWIN for improving radiation safety in high-energy accelerator facilities.

  8. Upgrades of DARWIN, a dose and spectrum monitoring system applicable to various types of radiation over wide energy ranges

    International Nuclear Information System (INIS)

    Sato, Tatsuhiko; Satoh, Daiki; Endo, Akira; Shigyo, Nobuhiro; Watanabe, Fusao; Sakurai, Hiroki; Arai, Yoichi

    2011-01-01

    A dose and spectrum monitoring system applicable to neutrons, photons and muons over wide ranges of energy, designated as DARWIN, has been developed for radiological protection in high-energy accelerator facilities. DARWIN consists of a phoswitch-type scintillation detector, a data-acquisition (DAQ) module for digital waveform analysis, and a personal computer equipped with a graphical-user-interface (GUI) program for controlling the system. The system was recently upgraded by introducing an original DAQ module based on a field programmable gate array, FPGA, and also by adding a function for estimating neutron and photon spectra based on an unfolding technique without requiring any specific scientific background of the user. The performance of the upgraded DARWIN was examined in various radiation fields, including an operational field in J-PARC. The experiments revealed that the dose rates and spectra measured by the upgraded DARWIN are quite reasonable, even in radiation fields with peak structures in terms of both spectrum and time variation. These results clearly demonstrate the usefulness of DARWIN for improving radiation safety in high-energy accelerator facilities.

  9. New development of EPICS based data acquisition system for H-Alpha diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Taegu, E-mail: glory@nfri.re.kr; Lee, Woongryol; Son, Souhun; Park, Jinseop

    2015-10-15

    Highlights: • The H-Alpha DAQ system was modified to measure the low-current signal from the PMT. • We developed a new H-Alpha data acquisition system with a cPCI-based digitizer. • We developed a signal conditioning box for converting the current to a voltage. • The new signal conditioning box (SCB) has three input range levels (400 nA, 1 μA and 2 μA). • It performs successfully and operates more stably than the previous DAQ system. - Abstract: The H-Alpha diagnostic system has been developed to measure the line-integrated intensity in the toroidal and poloidal directions. The data acquisition (DAQ) system for the H-Alpha diagnostics of the Korea Superconducting Tokamak Advanced Research (KSTAR) device at the time of first plasma in 2008 was developed with a VME form factor digitizer on the Linux OS platform. The VME digitizer module of the H-Alpha data acquisition system was modified to measure the low-current signal from the photo-multiplier tubes (PMT). The maximum input current of the modified digitizer module is 400 nA, and the low-current data are expressed as a voltage between −10 V and +10 V. Initially there was no problem measuring the H-Alpha signal, but as the KSTAR plasma density increased, the signal exceeded the 400 nA digitizer input range, so the resistor on the digitizer board had to be changed manually to measure currents above 400 nA. This is not easy to do, and the system showed instability in long-duration operation with high-sampling-rate data acquisition. In order to overcome these weak points, a new H-Alpha data acquisition system has been developed with a compact PCI (cPCI) based digitizer and a signal conditioning box for converting the current to a voltage on the Linux OS platform. The new data acquisition system was developed based on the Experimental Physics and Industrial Control System (EPICS) framework, like other KSTAR diagnostics with the standard framework (SFW
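
    The three input ranges of the signal conditioning box each imply a current-to-voltage gain. Assuming a simple transimpedance stage mapping the full-scale current onto the ±10 V digitizer span (the actual circuit is not described in the record), the required gains work out as follows:

```python
def feedback_resistor_ohm(full_scale_a, full_scale_v=10.0):
    """Transimpedance gain R = V_fs / I_fs for a current-to-voltage stage.
    Assumes full-scale current maps to the 10 V digitizer limit."""
    return full_scale_v / full_scale_a

r_400na = feedback_resistor_ohm(400e-9)  # 25 MOhm for the 400 nA range
r_1ua = feedback_resistor_ohm(1e-6)      # 10 MOhm for the 1 uA range
r_2ua = feedback_resistor_ohm(2e-6)      # 5 MOhm for the 2 uA range
```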

  10. A micro-TCA based data acquisition system for the Triple-GEM detectors for the upgrade of the CMS forward muon spectrometer

    CERN Document Server

    Lenzi, Thomas

    2016-01-01

    We will present the electronic and DAQ system being developed for Triple-GEM detectors which will be installed in the CMS muon spectrometer. The microTCA system uses an Advanced Mezzanine Card equipped with an FPGA and the Versatile Link with the GBT chipset to link the front- and back-end. On the detector, an FPGA mezzanine board, the OptoHybrid, collects the data from the detector readout chips and transmits them optically to the microTCA boards using the GBT protocol. We will describe the hardware architecture, report on the status of the developments, and present results obtained with the system. In this contribution we will report on the progress of the design of the electronic readout and data acquisition (DAQ) system being developed for Triple-GEM detectors which will be installed in the forward region (1.5 < eta < 2.2) of the CMS muon spectrometer during the 2nd long shutdown of the LHC, planned for the period 2018-2019. The architecture of the Triple-GEM readout system is based on the use of the...

  11. A Read-out and Data Acquisition System for the Outputs of Multi-channel Spectroscopy Amplifiers

    International Nuclear Information System (INIS)

    Kong Jie; Qian Yi; Su Hong; Dong Chengfu

    2009-01-01

    A read-out and data acquisition system for the outputs of multi-channel spectroscopy amplifiers is introduced briefly in this paper. The 16-channel gating integrator/multiplexer developed by us and a PXI-DAQ card are used to construct this system. A virtual instrument system for displaying, indicating, measuring and recording the output waveform is accomplished by flexibly integrating the PC, hardware and software based on the LabWindows/CVI platform. In this system, a single ADC can serve the 16 outputs of the 16-channel spectroscopy amplifiers, which improves the system integration and reduces the cost of the data acquisition system. The design provides a new way of building read-out and data acquisition systems using standard modules and spectroscopy amplifiers. This system has been tested and demonstrated to be intelligent, reliable, real-time and low cost. (authors)
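
    The 16-to-1 multiplexing scheme, in which one ADC serves all amplifier outputs in turn, can be sketched as a channel-scanning loop (a toy model; the real channel selection happens in the gating integrator/multiplexer hardware):

```python
def read_multiplexed(adc_read, n_channels=16):
    """Cycle a 16-to-1 multiplexer so a single ADC serves all channels.

    adc_read(ch) stands in for 'select mux channel ch, then digitise';
    in the real system this is the gating integrator/multiplexer + DAQ card.
    """
    return [adc_read(ch) for ch in range(n_channels)]

# Toy "hardware": channel ch holds the value 100 + ch.
samples = read_multiplexed(lambda ch: 100 + ch)
```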

  12. Implementation of the data acquisition system for the Overlap Modular Track Finder in the CMS experiment

    CERN Document Server

    Zabolotny, Wojciech; Bunkowski, Karol; Byszuk, Adrian Pawel; Dobosz, Jakub; Doroba, Krzysztof; Pawel Drabik; Gorski, Maciej; Kalinowski, Artur; Kierzkowski, Krzysztof Zdzislaw; Konecki, Marcin Andrzej; Oklinski, Wojciech; Olszewski, Michal; Pozniak, Krzysztof Tadeusz; Zawistowski, Krystian

    2017-01-01

    The CMS experiment is currently undergoing the upgrade of its trigger, including the Level-1 muon trigger. In the barrel-endcap transition region the Overlap Muon Track Finder (OMTF) combines data from three types of detectors (RPC, DT, and CSC) to find the muon candidates. To monitor the operation of the OMTF, it is important to receive the data which were the basis for the trigger decision. This task must be performed by the Data Acquisition (OMTF DAQ) system. The new MTCA technology applied in the updated trigger allows implementation of the OMTF DAQ together with the OMTF trigger in the MTF7 board. Further concentration of data is performed by standard AMC13 boards. The proposed data concentration methodology assumes parallel filtering and queuing of data arriving from all input links (24 RPC, 30 CSC, and 6 DT). The data are waiting for the trigger decision in the input buffers. The triggered data are then converted into the intermediate 72-bit format and put into the sorter queues. The block responsible for...
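
    The scheme of input buffers holding link data until the trigger decision arrives can be illustrated with a toy per-link buffer; the bunch-crossing window and buffer depth below are invented for illustration:

```python
from collections import deque

class LinkBuffer:
    """Per-link input buffer: data waits here for the trigger decision."""

    def __init__(self, depth=256):
        self.fifo = deque(maxlen=depth)  # oldest entries drop off the end

    def push(self, bx, payload):
        """Store a data word tagged with its bunch-crossing number."""
        self.fifo.append((bx, payload))

    def pop_triggered(self, trig_bx, window=1):
        """Return payloads within +/- window of the triggered crossing."""
        return [p for bx, p in self.fifo if abs(bx - trig_bx) <= window]

buf = LinkBuffer()
for bx in range(10):
    buf.push(bx, f"frame{bx}")
selected = buf.pop_triggered(trig_bx=5)  # frames around crossing 5
```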

  13. Development of an ADC Radiation Tolerance Characterization System for the Upgrade of the ATLAS LAr Calorimeter

    CERN Document Server

    INSPIRE-00445642; Chen, Kai; Kierstead, James; Lanni, Francesco; Takai, Helio; Jin, Ge

    2016-01-01

    The ATLAS LAr calorimeter will perform its Phase-I upgrade during the long shutdown (LS2) in 2018, when a new LAr Trigger Digitizer Board (LTDB) will be designed and installed. Several commercial-off-the-shelf (COTS) multichannel high-speed ADCs have been selected as possible backups of the radiation-tolerant ADC ASICs for the LTDB. In order to evaluate the radiation tolerance of these backup commercial ADCs, we developed an ADC radiation tolerance characterization system, which includes the ADC boards, a data acquisition (DAQ) board, a signal generator, external power supplies and a host computer. The ADC board is custom designed for the different ADCs, with the ADC driver and clock distribution circuits integrated on board. The Xilinx ZC706 FPGA development board is used as the DAQ board. The data from the ADC are routed to the FPGA through the FMC (FPGA Mezzanine Card) connector, de-serialized and monitored by the FPGA, and then transmitted to the host computer through Gigabit Ethernet. A software program has been developed wit...
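
    A common way to quantify radiation tolerance in such a setup is to stream a known pattern through the ADC and count flipped bits in the captured data. A sketch of that comparison (the record does not detail the actual test procedure, so this is only illustrative):

```python
def count_bit_errors(samples, expected, width=12):
    """Count flipped bits between captured ADC words and the expected
    pattern, as one might do while irradiating a 12-bit ADC."""
    mask = (1 << width) - 1
    errors = 0
    for got, exp in zip(samples, expected):
        errors += bin((got ^ exp) & mask).count("1")
    return errors

expected = [0xAAA, 0x555, 0xAAA, 0x555]   # alternating test pattern
captured = [0xAAA, 0x554, 0xAAA, 0x755]   # contains two single-bit upsets
n_errors = count_bit_errors(captured, expected)
```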

  14. Yarr: A PCIe based readout system for semiconductor tracking systems

    Energy Technology Data Exchange (ETDEWEB)

    Heim, Timon [Bergische Universitaet Wuppertal, Wuppertal (Germany); CERN, Geneva (Switzerland); Maettig, Peter [Bergische Universitaet Wuppertal, Wuppertal (Germany); Pernegger, Heinz [CERN, Geneva (Switzerland)

    2015-07-01

    The Yarr readout system is a novel DAQ concept, using an FPGA board connected via PCIe to a computer, to read out semiconductor tracking systems. The system uses the FPGA as a reconfigurable IO interface which, in conjunction with the very high speed of the PCIe bus, enables the data stream coming from the pixel detector to be processed in software. Modern computer systems could potentially make custom signal-processing hardware in readout systems obsolete, and the Yarr readout system showcases this for FE-I4 chips, which are state-of-the-art readout chips used in the ATLAS Pixel Insertable B-Layer and developed for tracking in high-multiplicity environments. The underlying concept of the Yarr readout system is to move intelligence from hardware into software without loss of performance, which is made possible by modern multi-core processors. The FPGA board firmware acts as a buffer and does no further processing of the data stream, enabling rapid integration of new hardware thanks to the minimal firmware.
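
    The Yarr idea of doing all stream interpretation in software, with the FPGA acting only as a buffer, can be illustrated with a toy decoder. The 32-bit word layout below (8-bit type field, 24-bit payload) is invented for illustration and is not the FE-I4 format:

```python
def decode_stream(words):
    """Toy software decoder in the spirit of Yarr: the FPGA only buffers
    raw words; all interpretation happens on the host. Word layout is
    hypothetical: bits [31:24] = type, [23:12] = column, [11:0] = row."""
    hits = []
    for w in words:
        kind = (w >> 24) & 0xFF
        if kind == 0x01:            # "hit" word
            col = (w >> 12) & 0xFFF
            row = w & 0xFFF
            hits.append((col, row))
        # other word types (headers, trailers, ...) are skipped here
    return hits

stream = [(0x01 << 24) | (7 << 12) | 42,   # hit at column 7, row 42
          (0x02 << 24) | 0]                # non-hit word, ignored
hits = decode_stream(stream)
```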

  15. Electronics and Calibration system for the CMS Beam Halo Monitor

    CERN Document Server

    Tosi, Nicolò; Fabbri, Franco L; Finkel, Alexey; Orfanelli, Stella; Loos, R; Montanari, Alessandro; Rusack, R; Stickland, David P

    2014-01-01

    In the context of increasing luminosity of LHC, it will be important to accurately measure the Machine Induced Background. A new monitoring system will be installed in the cavern of the Compact Muon Solenoid (CMS) experiment for measuring the beam background at high radius. This detector is composed of synthetic quartz Cherenkov radiators, coupled to fast photomultiplier tubes (PMT). The readout chain of this detector will make use of many components developed for the Phase 1 upgrade to the CMS Hadron Calorimeter electronics, with a dedicated firmware and readout adapted to the beam monitoring requirements. The PMT signal will be digitized by a charge integrating ASIC (QIE10), providing both the signal rise time and the charge integrated over one bunch crossing. The backend electronics will record bunch-by-bunch histograms, which will be published to CMS and the LHC using the newly designed CMS beam instrumentation specific DAQ. A calibration monitoring system has been designed to generate triggered pulses of...

  16. The fifth annual ALICE Industrial Awards ceremony on 9 March, 2007.

    CERN Multimedia

    2007-01-01

    The ALICE collaboration presents Quantum Corp with an award for the high performance cluster file system (StorNext) for the ALICE DAQ system, and for their outstanding cooperation in implementing the software. From left to right: Jurgen Schukraft (ALICE Spokesperson), Pierre vande Vyvre (ALICE DAQ), Hans Boggild (ALICE), Ewan Johnston (Quantum Corp.), Derek Barrilleaux (Quantum Corp.), Lance Hukill (Quantum Corp.), Ulrich Fuchs (ALICE DAQ), Catherine Decosse (ALICE) and Roberto Divia (ALICE DAQ).

  17. Design and implementation of data acquisition system for magnets of SST-1

    Energy Technology Data Exchange (ETDEWEB)

    Doshi, K., E-mail: pushpuk@ipr.res.in; Pradhan, S.; Masand, H.; Khristi, Y.; Dhongde, J.; Sharma, A.; Parghi, B.; Varmora, P.; Prasad, U.; Patel, D.

    2014-05-15

    The magnet system of the Steady-State Superconducting Tokamak-1 at the Institute for Plasma Research, Gandhinagar, India, consists of sixteen toroidal field and nine poloidal field superconducting coils, together with a pair of resistive PF coils, an air-core ohmic transformer and a pair of vertical field coils. These magnets are instrumented with various cryogenic-compatible sensors and voltage taps for monitoring, operation, protection and control during different machine operational scenarios such as cryogenic cool-down and current charging cycles including ramp up, flat top, plasma breakdown, dumping/ramp down and warm up. The data acquisition system for this magnet instrumentation has stringent requirements regarding operational flexibility, reliability for continuous long-term operation and data visualization during operations. A VME-hardware-based data acquisition system with an Ethernet-based remote system architecture is implemented for data acquisition and control of the complete magnet operation. The software application is developed in three parts, namely an embedded VME target, a network server and remote client applications. A target-board application implemented on a real-time operating system takes care of hardware configuration and raw data transmission to the server application. A Java server application manages several activities, mainly multiple-client communication over Ethernet, database interfacing and data storage. A Java-based platform-independent desktop client application has been developed for online and offline data visualization, remote hardware configuration and many other user interface tasks. The application has two modes of operation to cater to the different needs of cool-down and charging operations. This paper describes the application architecture, installation, commissioning and operational experience from the recent campaigns of SST-1.

  18. A micro-TCA based data acquisition system for the Triple-GEM detectors for the upgrade of the CMS forward muon spectrometer

    Science.gov (United States)

    Lenzi, T.

    2017-01-01

    The Gas Electron Multiplier (GEM) upgrade project aims at improving the performance of the muon spectrometer of the Compact Muon Solenoid (CMS) experiment which will suffer from the increase in luminosity of the Large Hadron Collider (LHC). The GEM collaboration proposes to instrument the first muon station with Triple-GEM detectors, a technology which has proven to be resistant to high fluxes of particles. The architecture of the readout system is based on the use of the microTCA standard hosting FPGA-based Advanced Mezzanine Card (AMC) and of the Versatile Link with the GBT chipset to link the on-detector electronics to the micro-TCA boards. For the front-end electronics a new ASIC, called VFAT3, is being developed. On the detector, a Xilinx Virtex-6 FPGA mezzanine board, called the OptoHybrid, has to collect the data from 24 VFAT3s and to transmit the data optically to the off-detector micro-TCA electronics, as well as to transmit the trigger data at 40 MHz to the CMS Cathode Strip Chamber (CSC) trigger. The microTCA electronics provides the interfaces from the detector (and front-end electronics) to the CMS DAQ, TTC (Timing, Trigger and Control) and Trigger systems. In this paper, we will describe the DAQ system of the Triple-GEM project and provide results from the latest test beam campaigns done at CERN.

  19. The ATLAS Trigger Core Configuration and Execution System in Light of the ATLAS Upgrade for LHC Run 2

    CERN Document Server

    Heinrich, Lukas; The ATLAS collaboration

    2015-01-01

    During the 2013/14 shutdown of the Large Hadron Collider (LHC) the ATLAS first level trigger (L1T) and the data acquisition system (DAQ) were substantially upgraded to cope with the increase in luminosity and collision multiplicity, expected to be delivered by the LHC in 2015. To name a few, the L1T was extended on the calorimeter side (L1Calo) to better cope with pile-up and apply better-tuned isolation criteria on electron, photon, and jet candidates. The central trigger (CT) was widened to analyze twice as many inputs, provide more trigger lines, and serve multiple sub-detectors in parallel during calibration periods. A new FPGA-based trigger, capable of analyzing event topologies at 40 MHz, was added to provide further input to forming the level 1 trigger decision (L1Topo). On the DAQ side the dataflow was completely remodeled, merging the two previously existing stages of the software-based high level trigger into one. Partially because of these changes, partially because of the new trigger paradigm to h...

  20. Development and testing of a system for meteorological and radiological data centralizing on Magurele zone

    International Nuclear Information System (INIS)

    Ciaus, M.; Niculescu, D.

    1997-01-01

    Within the framework of European collaboration co-ordinated by F.Z. Karlsruhe, the adapting, installing and developing of a decision support system for nuclear emergency management is now in progress in NIPNE. The decision support system RODOS will be available to the Romanian competent authorities - as well as to many western and eastern countries - in case of nuclear accidents. One main task in implementing the decision support system is to provide as input, among others, on-line real-time radiological and meteorological data measured on the national territory. For that purpose the regional and national measuring networks had to be connected to the central RODOS station in NIPNE. As the first step, the sources of data existing in the Magurele zone were coupled to the RODOS system. The main sources of data are different meteorological sensors and flowmeters installed at three levels on the meteorological tower and a remote gamma-ray area monitor with several measuring locations in the Magurele zone, connected by radio links to the central unit in the Nuclear Instruments and Methods Department. For transmission of the collected data to the central RODOS station, a local area network was implemented to connect all the computers. The Ethernet-based network uses optical fiber between buildings, coaxial and twisted-pair cable inside buildings and suitable Hewlett-Packard hubs and transceivers. Several communication software packages based on TCP/IP protocols were installed on the computers and tested. The real-time data transfer between collecting computers and the central station will be carried out by automatic triggering of FTP programs at regular time intervals. The local network also provides a link to the Internet, so that the indispensable exchange of data with similar RODOS centers in other countries, especially with the coordinating institute in Karlsruhe, as well as with other organizations, is ensured. (authors)

  1. Implementation of BES-III TOF trigger system in programmable logic devices

    International Nuclear Information System (INIS)

    Zheng Wei; Liu Shubin; Liu Xuzong; An Qi

    2009-01-01

    The TOF trigger sub-system of the upgraded Beijing Spectrometer is designed to receive 368 bits of fast hit signals from the front-end electronics modules and to yield 7 bits of trigger information according to the physics requirements. It sends the processed real-time trigger information to the Global-Trigger-Logic to generate the primary trigger signal L1, and sends 136 bits of processed real-time position information to the Track-Match-Logic to calculate the particle flight tracks. The sub-system also packages the valid events for the DAQ system to read out. Following the reconfigurable concept, a large number of programmable logic devices are employed to increase the flexibility and reliability of the system, and to decrease the complexity and space requirements of the PCB layout. This paper describes the implementation of the kernel trigger logic in a programmable logic device. (authors)
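
    One simple example of reducing 368 hit bits to a compact trigger word is a saturating multiplicity count; the actual BES-III trigger logic is more elaborate, so this is only an illustration of the bit-reduction idea:

```python
def tof_trigger_word(hit_bits):
    """Reduce a vector of fast hit bits to a 7-bit multiplicity word,
    saturating at 127. A simplified stand-in for real TOF trigger logic."""
    count = sum(hit_bits)
    return min(count, 0x7F)  # clamp to the 7-bit output width

hits = [0] * 368
hits[10] = hits[200] = hits[367] = 1
word = tof_trigger_word(hits)  # multiplicity 3
```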

  2. Data acquisition system for the focal-plane detector of the mass separator MASHA

    Science.gov (United States)

    Novoselov, A. S.; Rodin, A. M.; Motycak, S.; Podshibyakin, A. V.; Krupa, L.; Belozerov, A. V.; Vedeneyev, V. Yu.; Gulyaev, A. V.; Gulyaeva, A. V.; Kliman, J.; Salamatin, V. S.; Stepantsov, S. V.; Chernysheva, E. V.; Yukhimchuk, S. A.; Komarov, A. B.; Kamas, D.

    2016-09-01

    The results of the development and general information about the data acquisition system recently created at the MASHA setup (Flerov Laboratory of Nuclear Reactions at the Joint Institute for Nuclear Research) are presented. The main difference from the previous system is the use of a new modern platform, National Instruments PXI with XIA multichannel high-speed digitizers (250 MHz, 12 bit, 16 channels). At this moment the system has 448 spectrometric channels. The software and its features for data acquisition and analysis are also described. The new DAQ system expands the precision measuring capabilities for alpha decays and spontaneous fission at the focal-plane position-sensitive silicon strip detector which, in turn, increases the capabilities of the setup in such a field as low-yield registration of elements.

  3. Data acquisition system for the focal-plane detector of the mass separator MASHA

    International Nuclear Information System (INIS)

    Novoselov, A.S.; Rodin, A.M.; Podshibyakin, A.V.; Belozerov, A.V.; Vedeneyev, V.Yu.; Gulyaev, A.V.; Gulyaeva, A.V.; Salamatin, V.S.; Stepantsov, S.V.; Chernysheva, E.V.; Yukhimchuk, S.A.; Komarov, A.B.; Motycak, S.; Krupa, L.; Kliman, J.; Kamas, D.

    2016-01-01

    The results of the development and the general information about the data acquisition system which was recently created at the MASHA setup (Flerov Laboratory of Nuclear Reactions at the Joint Institute for Nuclear Research) are presented. The main difference from the previous system is that we use a new modern platform, National Instruments PXI with XIA multichannel high-speed digitizers (250 MHz 12 bit 16 channels). At this moment the system has 448 spectrometric channels. The software and its features for the data acquisition and analysis are also described. The new DAQ system expands precision measuring capabilities of alpha decays and spontaneous fission at the focal-plane position-sensitive silicon strip detector which, in turn, increases the capabilities of the setup in such a field as low-yield registration of elements.

  4. LabVIEW-based X-ray detection system for laser compton scattering experiment

    International Nuclear Information System (INIS)

    Luo Wen; Xu Wang; Pan Qiangyan

    2010-01-01

    A LabVIEW-based X-ray detection system has been developed for laser-Compton scattering (LCS) experiments at the 100 MeV linac of the Shanghai Institute of Applied Physics (SINAP). It mainly consists of a Si(Li) detector, readout electronics and a LabVIEW-based data acquisition (DAQ) system, and provides spectrum display, acquisition control and simple online data analysis. Performance tests show that the energy and time resolutions of the system are 184 eV at 5.9 keV and ≤ 1% respectively, and the system instability was found to be 0.3‰ within a week. This X-ray detection system is thus a low-cost, high-performance solution that meets the requirements of the LCS experiment. (authors)

  5. Peer-To-Peer Architectures in Distributed Data Management Systems for Large Hadron Collider Experiments

    CERN Document Server

    Lo Presti, Giuseppe; Lo Re, G; Orsini, L

    2005-01-01

    The main goal of the presented research is to investigate Peer-to-Peer architectures and to leverage distributed services to support networked autonomous systems. The research work focuses on development and demonstration of technologies suitable for providing autonomy and flexibility in the context of distributed network management and distributed data acquisition. A network management system enables the network administrator to monitor a computer network and properly handle any failure that can arise within the network. An online data acquisition (DAQ) system for high-energy physics experiments has to collect, combine, filter, and store for later analysis a huge amount of data, describing subatomic particles collision events. Both domains have tight constraints which are discussed and tackled in this work. New emerging paradigms have been investigated to design novel middleware architectures for such distributed systems, particularly the Active Networks paradigm and the Peer-to-Peer paradigm. A network man...

  6. Autonomous acquisition systems for TJ-II: controlling instrumentation with a fourth generation language

    International Nuclear Information System (INIS)

    Sanchez, E.; Portas, A.B.; Vega, J.; Agudo, J.M.; McCarthy, K.J.; Ruiz, M.; Barrera, E.; Lopez, S.

    2004-01-01

    Recently, 536 new acquisition channels, made-up of three different channel types, have been incorporated into the TJ-II data acquisition system (DAQ). Dedicated software has also been developed to permit experimentalists to program and control the data acquisition in these systems. The software has been developed using LabView and runs under the Windows 2000 operating system in both personal computer (PC) and PXI controllers. In addition, LabView software has been developed to control TJ-II VXI channels from a PC using a MXI connection. This new software environment will also aid future integration of acquisition channels into the TJ-II remote participation system. All of these acquisition devices work autonomously and are connected to the TJ-II central server via a local area network. In addition, they can be remotely controlled from the TJ-II control-room using Virtual Network Computing (VNC) software

  7. Intra and Inter-IOM Communications Summary Document

    CERN Document Server

    Ambrosini, G; Cetin, S A; Conka, T; Fernandes, A; Francis, D; Joos, M; Lehmann, G; Mailov, A; Mapelli, L; Mornacchi, Giuseppe; Niculescu, M; Nurdan, K; Petersen, J; Spiwoks, R; Tremblet, L J; Ünel, G

    1999-01-01

    This document summarises the work performed, within the context of the DAQ-Unit of the DataFlow system in the ATLAS DAQ/EF prototype -1, on intra- and inter-Input/Output Module (IOM) message passing. This document fulfils the ATLAS DAQ/EF prototype -1 milestone of February 1999.

  8. Production Performance of the ATLAS Semiconductor Tracker Readout System

    CERN Document Server

    Mitsou, V A

    2006-01-01

    The ATLAS Semiconductor Tracker (SCT) together with the pixel and the transition radiation detectors will form the tracking system of the ATLAS experiment at LHC. It will consist of 20000 single-sided silicon microstrip sensors assembled back-to-back into modules mounted on four concentric barrels and two end-cap detectors formed by nine disks each. The SCT module production and testing has finished while the macro-assembly is well under way. After an overview of the layout and the operating environment of the SCT, a description of the readout electronics design and operation requirements will be given. The quality control procedure and the DAQ software for assuring the electrical functionality of hybrids and modules will be discussed. The focus will be on the electrical performance results obtained during the assembly and testing of the end-cap SCT modules.

  9. The CMS tracker control system

    International Nuclear Information System (INIS)

    Dierlamm, A; Dirkes, G H; Fahrer, M; Frey, M; Hartmann, F; Masetti, L; Militaru, O; Shah, S Y; Stringer, R; Tsirou, A

    2008-01-01

    The Tracker Control System (TCS) is a distributed control software to operate about 2000 power supplies for the silicon modules of the CMS Tracker and monitor its environmental sensors. TCS must thus be able to handle about 10⁴ power supply parameters, about 10³ environmental probes from the Programmable Logic Controllers of the Tracker Safety System (TSS), and about 10⁵ parameters read via DAQ from the DCUs in all front-end hybrids and from CCUs in all control groups. TCS is built on top of an industrial SCADA program (PVSS) extended with a framework developed at CERN (JCOP) and used by all LHC experiments. The logical partitioning of the detector is reflected in the hierarchical structure of the TCS, where commands move down to the individual hardware devices, while states are reported up to the root, which is interfaced to the broader CMS control system. The system computes and continuously monitors the mean and maximum values of critical parameters and updates the percentage of currently operating hardware. Automatic procedures switch off selected parts of the detector with fine granularity, avoiding widespread TSS intervention
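The command-down/state-up hierarchy described in the abstract can be sketched as a simple control tree. The class and state names below are illustrative, not the actual PVSS/JCOP finite-state-machine API:

```python
class ControlNode:
    """One node of a hierarchical control tree: commands propagate down
    to leaf hardware devices, states are summarized back up to the root."""

    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.state = "OFF"  # meaningful only for leaves (hardware devices)

    def command(self, cmd):
        """Commands move down the tree to the individual devices."""
        if self.children:
            for child in self.children:
                child.command(cmd)
        else:
            self.state = "ON" if cmd == "SWITCH_ON" else "OFF"

    def summary(self):
        """States are reported up: ON/OFF if unanimous, else MIXED."""
        if not self.children:
            return self.state
        states = {c.summary() for c in self.children}
        return states.pop() if len(states) == 1 else "MIXED"

    def operating_percentage(self):
        """Percentage of leaf devices currently operating (ON)."""
        leaves = self._leaves()
        on = sum(1 for leaf in leaves if leaf.state == "ON")
        return 100.0 * on / len(leaves)

    def _leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c._leaves()]
```

Switching on one partition of a two-partition tree leaves the root in a MIXED state with 50% of the hardware operating, mirroring how the TCS reports partial detector states upward.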

  10. The CMS tracker control system

    Science.gov (United States)

    Dierlamm, A.; Dirkes, G. H.; Fahrer, M.; Frey, M.; Hartmann, F.; Masetti, L.; Militaru, O.; Shah, S. Y.; Stringer, R.; Tsirou, A.

    2008-07-01

    The Tracker Control System (TCS) is a distributed control software to operate about 2000 power supplies for the silicon modules of the CMS Tracker and monitor its environmental sensors. TCS must thus be able to handle about 10⁴ power supply parameters, about 10³ environmental probes from the Programmable Logic Controllers of the Tracker Safety System (TSS), and about 10⁵ parameters read via DAQ from the DCUs in all front-end hybrids and from CCUs in all control groups. TCS is built on top of an industrial SCADA program (PVSS) extended with a framework developed at CERN (JCOP) and used by all LHC experiments. The logical partitioning of the detector is reflected in the hierarchical structure of the TCS, where commands move down to the individual hardware devices, while states are reported up to the root, which is interfaced to the broader CMS control system. The system computes and continuously monitors the mean and maximum values of critical parameters and updates the percentage of currently operating hardware. Automatic procedures switch off selected parts of the detector with fine granularity, avoiding widespread TSS intervention.

  11. The upgrade of the ATLAS High Level Trigger and Data Acquisition systems and their integration

    CERN Document Server

    Abreu, R; The ATLAS collaboration

    2014-01-01

    The Data Acquisition (DAQ) and High Level Trigger (HLT) systems that served the ATLAS experiment during LHC's first run are being upgraded in the first long LHC shutdown period, from 2013 to 2015. This contribution describes the elements that are vital for the new interaction between the two systems. The central architectural enhancement is the fusion of the once separate Level 2, Event Building (EB), and Event Filter steps. Through the factorization of previously disperse functionality and better exploitation of caching mechanisms, the inherent simplification carries with it an increase in performance. Flexibility to different running conditions is improved by an automatic balance of formerly separate tasks. Incremental EB is the principle of the new Data Collection, whereby the HLT farm avoids duplicate requests to the detector Read-Out System (ROS) by preserving and reusing previously obtained data. Moreover, requests are packed and fetched together to avoid redundant trips to the ROS. Anticipated EB is ac...
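The incremental Event Building principle described above (preserve and reuse fragments already obtained, and pack outstanding requests into a single trip to the Read-Out System) can be sketched as follows. The class names are illustrative stand-ins, not the actual ATLAS HLT code:

```python
class ROS:
    """Stand-in Read-Out System: serves detector fragments by id."""

    def __init__(self, fragments):
        self.fragments = fragments
        self.requests = 0  # count round trips, not individual fragments

    def fetch(self, ids):
        self.requests += 1  # one packed request serves a whole batch
        return {i: self.fragments[i] for i in ids}


class DataCollector:
    """Caches fragments so repeated requests never go back to the ROS,
    and packs all missing ids into a single fetch."""

    def __init__(self, ros):
        self.ros = ros
        self.cache = {}

    def collect(self, ids):
        missing = [i for i in ids if i not in self.cache]
        if missing:
            self.cache.update(self.ros.fetch(missing))
        return {i: self.cache[i] for i in ids}
```

Requesting fragments {1, 2} and then {2, 3} costs two ROS round trips instead of four, and fragment 2 is served from the cache the second time.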

  12. Tests of the data acquisition system and detector control system for the muon chambers of the CMS experiment at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Sowa, Michael Christian

    2009-02-27

    The Physics Institute III A of RWTH Aachen University is involved in the development, production and testing of the Drift Tube (DT) muon chambers for the barrel muon system of the CMS detector at the LHC at CERN (Geneva). The present thesis describes test procedures that were developed and performed for the chamber-local Data Acquisition (DAQ) system, as well as for parts of the Detector Control System (DCS); the test results are analyzed and discussed. Two main kinds of DAQ tests were done. On the one hand, to compare two different DAQ systems, the chamber signals were split and read out by both systems. This method validated both systems by demonstrating that there were no relevant differences in the drift times measured for the same muon event in the same chamber cells. On the other hand, once the systems were validated, the quality of the data was checked through extensive noise studies. The dependence of the noise on various parameters (threshold, HV) was investigated quantitatively, and detailed studies of single cells classified as 'dead' or 'noisy' were performed. The DAQ tests required a flexible hardware and software environment; the organization and installation of the supplied electronics, as well as the software development, were realized within the scope of this thesis. The DCS tests focused on the local gas-pressure readout components attached directly to the chamber: the pressure sensor, the manifolds and the pressure ADC (PADC). It was first crucial to prove that the calibration of these chamber components for the gas pressure measurement is valid. The sensor calibration data were checked and possible differences in their response to the same pressure were studied. The analysis indicated that the sensor output also depends on the ambient temperature, a new finding that required an additional pedestal measurement of the chamber gas pressure

  13. Acquisition system of analysis and control data for the catalytic isotopic exchange module of the cryogenic pilot plant with mathematical modeling

    International Nuclear Information System (INIS)

    Retevoi, Carmen Maria; Cristescu, Ioana; Bornea, Anisia; Cristescu, Ion

    2000-01-01

    The main problem of the isotope exchange is the catalytic action of the reaction. In order to increase the economic efficiency, the use of hydrophobic catalysts is suggested. The 'virtual instrument' we designed monitors and keeps constant the column temperature and issues analysis and power-supply commands for the electrical heat exchangers. With the popular signal conditioning product line SCXI 1100 and the DAQ hardware AT-MIO-16-XE-10 from National Instruments, we perform multi-channel acquisition at DAQ board rates. We chose signal conditioning for the following advantages: electrical isolation, transducer interfacing, signal amplification, filtering and high-speed channel multiplexing. The mathematical modeling allows a graphical representation of the equilibrium and operating curves for the system governed by the exchange reaction H₂O + HD → HDO + H₂. With the McCabe-Thiele diagram, the number of theoretical plates is determined for different configurations of hydrophilic packing / catalyst bed. It is also easy to monitor the variation of the operating parameters (L/G, the liquid-to-gas ratio; temperature; etc.) and the feed concentrations in the gaseous and liquid phases for the separation performance. (authors)
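The graphical McCabe-Thiele stepping used in the abstract has a well-known closed-form counterpart when the equilibrium line is approximated as linear (y* = m·x): the Kremser equation. A minimal sketch, assuming a countercurrent contactor with absorption factor A = L/(m·G); this is an illustrative textbook formula, not code from the paper:

```python
import math

def kremser_stages(y_in, y_out, x_in, m, A):
    """Number of theoretical stages for a countercurrent contactor with a
    linear equilibrium line y* = m*x and absorption factor A = L/(m*G).
    y_in/y_out: gas-phase mole fractions entering/leaving;
    x_in: liquid-phase mole fraction entering at the top."""
    if A == 1.0:
        # Limiting form of the Kremser equation for A = 1
        return (y_in - y_out) / (y_out - m * x_in)
    ratio = (y_in - m * x_in) / (y_out - m * x_in)
    return math.log(ratio * (1.0 - 1.0 / A) + 1.0 / A) / math.log(A)
```

For example, with pure inlet liquid (x_in = 0), m = 1 and A = 2, reducing the gas-phase concentration from 1.0 to 1/15 requires exactly 3 theoretical stages.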

  14. Development of Network Interface Cards for TRIDAQ systems with the NaNet framework

    International Nuclear Information System (INIS)

    Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Cicero, F. Lo; Lonardo, A.; Martinelli, M.; Paolucci, P.S.; Pastorelli, E.; Simula, F.; Valente, P.; Vicini, P.; Lorenzo, S. Di; Piandani, R.; Pontisso, L.; Sozzi, M.; Fiorini, M.; Neri, I.; Lamanna, G.; Rossetti, D.

    2017-01-01

    NaNet is a framework for the development of FPGA-based PCI Express (PCIe) Network Interface Cards (NICs) with real-time data transport architecture that can be effectively employed in TRIDAQ systems. Key features of the architecture are the flexibility in the configuration of the number and kind of the I/O channels, the hardware offloading of the network protocol stack, the stream processing capability, and the zero-copy CPU and GPU Remote Direct Memory Access (RDMA). Three NIC designs have been developed with the NaNet framework: NaNet-1 and NaNet-10 for the CERN NA62 low level trigger and NaNet-3 for the KM3NeT-IT underwater neutrino telescope DAQ system. We will focus our description on the NaNet-10 design, as it is the most complete of the three in terms of capabilities and integrated IPs of the framework.

  15. The data acquisition system used in one-dimension multichannel fast electron energy loss spectrometer

    International Nuclear Information System (INIS)

    Jiang Weichun; Zhu Linfan; Zhang Yijun; Xu Kezuo

    2010-01-01

    This paper describes a data acquisition system used in a one-dimensional multichannel fast electron energy loss spectrometer, which can work in a scan acquisition mode and a static acquisition mode. The timing precision of the scan mode is better than 4 μs, achieved by utilizing the gated signal generated by the DAQ2010 data acquisition card and an AND logic circuit. A PCI8554 timer card is used to synchronize the data acquisition card and the personal computer. The scan voltage supply is controlled by the personal computer through the RS-232 interface. Multithreading is used in the acquisition software to improve the fault tolerance of the acquisition system. A satisfactory test result is given. (authors)

  16. Proof of concept of an imaging system demonstrator for PET applications with SiPM

    International Nuclear Information System (INIS)

    Morrocchi, Matteo; Marcatili, Sara; Belcari, Nicola; Giuseppina Bisogni, Maria; Collazuol, Gianmaria; Ambrosi, Giovanni; Santoni, Cristiano; Corsi, Francesco; Foresta, Maurizio; Marzocca, Cristoforo; Matarrese, Gianvito; Sportelli, Giancarlo; Guerra, Pedro; Santos, Andres; Del Guerra, Alberto

    2013-01-01

    A PET imaging system demonstrator based on LYSO crystal arrays coupled to SiPM matrices is under construction at the University and INFN of Pisa. Two SiPM matrices, each composed of 8×8 SiPM pixels with a 1.5 mm pitch, have been coupled one-to-one to LYSO crystal arrays and read out by a custom electronics system. Front-end ASICs were used to read 8 channels of each matrix. Data from each front-end were multiplexed and sent to a DAQ board for digital conversion; a motherboard collects the data and communicates with a host computer through a USB port for storage and off-line data processing. In this paper we show the first preliminary tomographic image of a point-like radioactive source acquired with part of the two detection heads in time coincidence
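The time-coincidence selection between the two detection heads can be sketched as a two-pointer scan over time-sorted hit lists. This is a generic illustration of coincidence windowing, not the demonstrator's actual firmware or software; the window value is a placeholder:

```python
def find_coincidences(head_a, head_b, window_ns=10.0):
    """Pair hit timestamps (in ns, sorted ascending) from two detection
    heads whenever they fall within a coincidence time window."""
    pairs = []
    i = j = 0
    while i < len(head_a) and j < len(head_b):
        dt = head_a[i] - head_b[j]
        if abs(dt) <= window_ns:
            pairs.append((head_a[i], head_b[j]))
            i += 1
            j += 1
        elif dt > 0:
            j += 1  # head_b hit is too early; advance it
        else:
            i += 1  # head_a hit is too early; advance it
    return pairs
```

Because both lists are sorted, each hit is visited once, so the scan is linear in the total number of hits.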

  17. Proof of concept of an imaging system demonstrator for PET applications with SiPM

    Energy Technology Data Exchange (ETDEWEB)

    Morrocchi, Matteo, E-mail: matteo.morrocchi@pi.infn.it [University of Pisa and INFN Sezione di Pisa, Pisa 56127 (Italy); Marcatili, Sara; Belcari, Nicola; Giuseppina Bisogni, Maria [University of Pisa and INFN Sezione di Pisa, Pisa 56127 (Italy); Collazuol, Gianmaria [INFN Sezione di Pisa, Pisa 56127 (Italy); Ambrosi, Giovanni; Santoni, Cristiano [INFN Sezione di Perugia, Perugia 06100 (Italy); Corsi, Francesco; Foresta, Maurizio; Marzocca, Cristoforo; Matarrese, Gianvito [Politecnico di Bari and INFN Sezione di Bari, Bari 70100 (Italy); Sportelli, Giancarlo [University of Pisa and INFN Sezione di Pisa, Pisa 56127 (Italy); Guerra, Pedro; Santos, Andres [Universidad Politecnica de Madrid, E 28040 Madrid (Spain); Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN) (Spain); Del Guerra, Alberto [University of Pisa and INFN Sezione di Pisa, Pisa 56127 (Italy)

    2013-08-21

    A PET imaging system demonstrator based on LYSO crystal arrays coupled to SiPM matrices is under construction at the University and INFN of Pisa. Two SiPM matrices, each composed of 8×8 SiPM pixels with a 1.5 mm pitch, have been coupled one-to-one to LYSO crystal arrays and read out by a custom electronics system. Front-end ASICs were used to read 8 channels of each matrix. Data from each front-end were multiplexed and sent to a DAQ board for digital conversion; a motherboard collects the data and communicates with a host computer through a USB port for storage and off-line data processing. In this paper we show the first preliminary tomographic image of a point-like radioactive source acquired with part of the two detection heads in time coincidence.

  18. Combined Cycle Engine Large-Scale Inlet for Mode Transition Experiments: System Identification Rack Hardware Design

    Science.gov (United States)

    Thomas, Randy; Stueber, Thomas J.

    2013-01-01

    The System Identification (SysID) Rack is a real-time hardware-in-the-loop data acquisition (DAQ) and control instrument rack that was designed and built to support inlet testing in the NASA Glenn Research Center 10- by 10-Foot Supersonic Wind Tunnel. This instrument rack is used to support experiments on the Combined-Cycle Engine Large-Scale Inlet for Mode Transition Experiment (CCE-LIMX). The CCE-LIMX is a testbed for an integrated dual flow-path inlet configuration with the two flow paths in an over-and-under arrangement such that the high-speed flow path is located below the low-speed flow path. The CCE-LIMX includes multiple actuators that are designed to redirect airflow from one flow path to the other; this action is referred to as "inlet mode transition." Multiple phases of experiments have been planned to support research that investigates inlet mode transition: inlet characterization (Phase-1) and system identification (Phase-2). The SysID Rack hardware design met the following requirements to support Phase-1 and Phase-2 experiments: safely and effectively move multiple actuators individually or synchronously; sample and save effector control and position sensor feedback signals; automate control of actuator positioning based on a mode transition schedule; sample and save pressure sensor signals; and perform DAQ and control processes operating at 2.5 kHz. This document describes the hardware components used to build the SysID Rack, including their function, specifications, and system interface. Furthermore, this document provides a SysID Rack effector signal list (signal flow), the system identification experiment setup, illustrations of a typical SysID Rack experiment, and a SysID Rack performance overview for Phase-1 and Phase-2 experiments. The SysID Rack described in this document was a useful tool to meet the project objectives.

  19. A data acquisition system for measuring ionization cross section in laser multi-step resonant ionization experiment

    International Nuclear Information System (INIS)

    Qian Dongbin; Guo Yuhui; Zhang Dacheng; Chinese Academy of Sciences, Beijing; Ma Xinwen; Zhao Zhizheng; Wang Yanyu; Zu Kailing

    2006-01-01

    A CAMAC data acquisition system for measuring the ionization cross section in a laser multi-step resonant ionization experiment is described. The number of scalers in the front-end CAMAC can be adjusted by changing the data read-out table files. Both continuous and manual acquisition modes are available, and the acquisition time unit is adjustable over a wide range from 1 ms to 800 s. The long-term stability, Δt/t, of the data acquisition system with an acquisition time unit of 100 s was measured to be better than ±0.01%, validating its reliability for long-term online experimental data acquisition. The time response curves of three electrothermal power meters were also measured with this DAQ system. (authors)

  20. Soft real-time alarm messages for ATLAS TDAQ

    Science.gov (United States)

    Darlea, G.; Al Shabibi, A.; Martin, B.; Lehmann Miotto, G.

    2010-05-01

    The ATLAS TDAQ network consists of three separate Ethernet-based networks (Data, Control and Management) with over 2000 end-nodes. The TDAQ system has to be aware of the meaningful network failures and events in order for it to take effective recovery actions. The first stage of the process is implemented with Spectrum, a commercial network management tool. Spectrum detects and registers all network events, then it publishes the information via a CORBA programming interface. A gateway program (called NSG—Network Service Gateway) connects to Spectrum through CORBA and exposes to its clients a Java RMI interface. This interface implements a callback mechanism that allows the clients to subscribe for monitoring "interesting" parts of the network. The last stage of the TDAQ network monitoring tool is implemented in a module named DNC (DAQ to Network Connection), which filters the events that are to be reported to the TDAQ system: it subscribes to the gateway only for the machines that are currently active in the system and it forwards only the alarms that are considered important for the current TDAQ data taking session. The network information is then synthesized and presented in a human-readable format. These messages can be further processed either by the shifter who is in charge, the network expert or the Online Expert System. This article aims to describe the different mechanisms of the chain that transports the network events to the front-end user, as well as the constraints and rules that govern the filtering and the final format of the alarm messages.
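The filtering stage that the abstract attributes to the DNC (forward an event only if the machine is active in the current data-taking session and the alarm kind is considered important, then synthesize a human-readable message) can be sketched as below. The class, node names and alarm kinds are illustrative, not the actual CORBA/RMI interfaces:

```python
class AlarmFilter:
    """DNC-style filter: drop events for inactive machines or
    unimportant kinds, and format the survivors for the shifter."""

    def __init__(self, active_nodes, important):
        self.active = set(active_nodes)    # machines in the current session
        self.important = set(important)    # alarm kinds worth reporting
        self.delivered = []                # messages shown to the shifter

    def on_event(self, node, kind, detail=""):
        """Callback for one network event; returns the formatted
        message, or None if the event is filtered out."""
        if node not in self.active or kind not in self.important:
            return None
        msg = f"[{kind}] {node}: {detail}"  # human-readable summary
        self.delivered.append(msg)
        return msg
```

A purely informational event, or a failure on a machine outside the current session, is silently dropped, so the shifter only sees alarms relevant to the ongoing run.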

  1. Soft real-time alarm messages for ATLAS TDAQ

    International Nuclear Information System (INIS)

    Darlea, G.; Al Shabibi, A.; Martin, B.; Lehmann Miotto, G.

    2010-01-01

    The ATLAS TDAQ network consists of three separate Ethernet-based networks (Data, Control and Management) with over 2000 end-nodes. The TDAQ system has to be aware of the meaningful network failures and events in order for it to take effective recovery actions. The first stage of the process is implemented with Spectrum, a commercial network management tool. Spectrum detects and registers all network events, then it publishes the information via a CORBA programming interface. A gateway program (called NSG-Network Service Gateway) connects to Spectrum through CORBA and exposes to its clients a Java RMI interface. This interface implements a callback mechanism that allows the clients to subscribe for monitoring 'interesting' parts of the network. The last stage of the TDAQ network monitoring tool is implemented in a module named DNC (DAQ to Network Connection), which filters the events that are to be reported to the TDAQ system: it subscribes to the gateway only for the machines that are currently active in the system and it forwards only the alarms that are considered important for the current TDAQ data taking session. The network information is then synthesized and presented in a human-readable format. These messages can be further processed either by the shifter who is in charge, the network expert or the Online Expert System. This article aims to describe the different mechanisms of the chain that transports the network events to the front-end user, as well as the constraints and rules that govern the filtering and the final format of the alarm messages.

  2. Soft real-time alarm messages for ATLAS TDAQ

    Energy Technology Data Exchange (ETDEWEB)

    Darlea, G., E-mail: georgiana.lavinia.darlea@cern.c [CERN, Geneva (Switzerland); Al Shabibi, A.; Martin, B.; Lehmann Miotto, G. [CERN, Geneva (Switzerland)

    2010-05-21

    The ATLAS TDAQ network consists of three separate Ethernet-based networks (Data, Control and Management) with over 2000 end-nodes. The TDAQ system has to be aware of the meaningful network failures and events in order for it to take effective recovery actions. The first stage of the process is implemented with Spectrum, a commercial network management tool. Spectrum detects and registers all network events, then it publishes the information via a CORBA programming interface. A gateway program (called NSG-Network Service Gateway) connects to Spectrum through CORBA and exposes to its clients a Java RMI interface. This interface implements a callback mechanism that allows the clients to subscribe for monitoring 'interesting' parts of the network. The last stage of the TDAQ network monitoring tool is implemented in a module named DNC (DAQ to Network Connection), which filters the events that are to be reported to the TDAQ system: it subscribes to the gateway only for the machines that are currently active in the system and it forwards only the alarms that are considered important for the current TDAQ data taking session. The network information is then synthesized and presented in a human-readable format. These messages can be further processed either by the shifter who is in charge, the network expert or the Online Expert System. This article aims to describe the different mechanisms of the chain that transports the network events to the front-end user, as well as the constraints and rules that govern the filtering and the final format of the alarm messages.

  3. PXIe based data acquisition and control system for ECRH systems on SST-1 and Aditya tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Patel, Jatinkumar J., E-mail: jatin@ipr.res.in [Institute for Plasma Research, Bhat, Gandhinagar (India); Shukla, B.K.; Rajanbabu, N.; Patel, H.; Dhorajiya, P.; Purohit, D. [Institute for Plasma Research, Bhat, Gandhinagar (India); Mankadiya, K. [Optimized Solutions Pvt. Ltd (India)

    2016-11-15

    Highlights: • Data Acquisition and Control system (DAQ). • PXIe hardware (PXI: PCI bus extension for Instrumentation Express). • RHVPS: Regulated High Voltage Power Supply. • SST-1: Steady State Superconducting tokamak. - Abstract: In the Steady State Superconducting (SST-1) tokamak, various RF heating sub-systems are used for plasma heating experiments. In SST-1, two Electron Cyclotron Resonance Heating (ECRH) systems have been installed for pre-ionization, heating and current drive experiments. The 42 GHz gyrotron based ECRH system is installed and in operation with SST-1 plasma experiments. The 82.6 GHz gyrotron delivers 200 kW CW power (1000 s) while the 42 GHz gyrotron delivers 500 kW power for 500 ms duration. Each gyrotron system consists of various auxiliary power supplies, the crowbar unit and the water cooling system. A PXIe (PCI bus extension for Instrumentation Express) bus based DAC (Data Acquisition and Control) system has been designed and developed and is under implementation for safe and reliable operation of the gyrotron. The control and monitoring software applications have been developed using NI LabVIEW 2014 with real-time support on the Windows platform.

  4. PXIe based data acquisition and control system for ECRH systems on SST-1 and Aditya tokamak

    International Nuclear Information System (INIS)

    Patel, Jatinkumar J.; Shukla, B.K.; Rajanbabu, N.; Patel, H.; Dhorajiya, P.; Purohit, D.; Mankadiya, K.

    2016-01-01

    Highlights: • Data Acquisition and Control system (DAQ). • PXIe hardware (PXI: PCI bus extension for Instrumentation Express). • RHVPS: Regulated High Voltage Power Supply. • SST-1: Steady State Superconducting tokamak. - Abstract: In the Steady State Superconducting (SST-1) tokamak, various RF heating sub-systems are used for plasma heating experiments. In SST-1, two Electron Cyclotron Resonance Heating (ECRH) systems have been installed for pre-ionization, heating and current drive experiments. The 42 GHz gyrotron based ECRH system is installed and in operation with SST-1 plasma experiments. The 82.6 GHz gyrotron delivers 200 kW CW power (1000 s) while the 42 GHz gyrotron delivers 500 kW power for 500 ms duration. Each gyrotron system consists of various auxiliary power supplies, the crowbar unit and the water cooling system. A PXIe (PCI bus extension for Instrumentation Express) bus based DAC (Data Acquisition and Control) system has been designed and developed and is under implementation for safe and reliable operation of the gyrotron. The control and monitoring software applications have been developed using NI LabVIEW 2014 with real-time support on the Windows platform.

  5. Update of the Picker C9 irradiator control system of the gamma II room of the secondary laboratory of dosimetric calibration

    International Nuclear Information System (INIS)

    Simon S, L. E.

    2016-01-01

    The Picker C9 irradiator is responsible for the calibration of different radiological equipment, and the control system that keeps it in operation is designed in the graphical programming software LabVIEW (Laboratory Virtual Instrumentation Engineering Workbench), whose major advantages include its different types of communication, easy interconnection with other software, and recognition of different hardware devices. Operation of the irradiator control system is performed with the NI USB-6008 data acquisition (DAQ) module from National Instruments. The purpose of this work is to update the routines that make up the Picker C9 control system of the gamma II room of the secondary laboratory of dosimetric calibration using the graphical programming software LabVIEW, as well as to configure the new data acquisition hardware implemented to control the Picker C9 irradiator system and ensure its operation. (Author)

  6. Development of Labview based data acquisition and multichannel analyzer software for radioactive particle tracking system

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Nur Aira Abd, E-mail: nur-aira@nuclearmalaysia.gov.my; Yussup, Nolida; Ibrahim, Maslina Bt. Mohd; Abdullah, Nor Arymaswati; Mokhtar, Mukhlis B. [Technical Support Division, Malaysian Nuclear Agency, 43000, Kajang, Selangor (Malaysia); Abdullah, Jaafar B.; Hassan, Hearie B. [Industrial Technology Division, Malaysian Nuclear Agency, 43000, Kajang, Selangor (Malaysia)

    2015-04-29

    A DAQ (data acquisition) software package called RPTv2.0 has been developed for the Radioactive Particle Tracking System at the Malaysian Nuclear Agency. RPTv2.0 features a scanning control GUI, data acquisition from a 12-channel counter via an RS-232 interface, and a multichannel analyzer (MCA). The software is fully developed on the National Instruments LabVIEW 8.6 platform. A Ludlum Model 4612 counter is used to count the signals from the scintillation detectors, while a host computer sends control parameters, acquires and displays data, and computes results. Each detector channel has independent high-voltage control, a threshold (sensitivity) value and window settings. The counter is configured with a host board and twelve slave boards; the host board collects the counts from each slave board and communicates with the computer via the RS-232 data interface.

  7. Labview Based ECG Patient Monitoring System for Cardiovascular Patient Using SMTP Technology.

    Science.gov (United States)

    Singh, Om Prakash; Mekonnen, Dawit; Malarvili, M B

    2015-01-01

    This paper presents the development of a LabVIEW-based ECG patient monitoring system for cardiovascular patients using Simple Mail Transfer Protocol (SMTP) technology. The designed device is divided into three parts. The first part is the ECG amplifier circuit, built around an instrumentation amplifier (AD620) followed by a signal-conditioning stage based on an operational amplifier (LM741). Second, a DAQ card converts the analog signal into digital form for further processing. The data are then processed in LabVIEW, where digital filtering techniques remove noise from the acquired signal. After filtering, an algorithm calculates the heart rate and analyzes the signal for arrhythmia. Finally, SMTP support makes the device more communicative and provides a cost-effective telemedicine solution for the telediagnosis and remote monitoring of ECG signals, a technology that can be deployed easily over the existing Internet.
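
The heart-rate step mentioned above can be sketched simply: once R peaks are detected, the rate follows from the mean R-R interval. Peak detection itself (done in LabVIEW in the paper) is assumed to have already happened.

```python
# Minimal sketch of heart-rate computation from detected R peaks.
# r_peaks holds sample indices of R peaks; fs_hz is the sampling rate.

def heart_rate_bpm(r_peaks: list[int], fs_hz: float) -> float:
    """Mean heart rate in beats per minute from R-peak sample indices."""
    if len(r_peaks) < 2:
        raise ValueError("need at least two R peaks")
    # successive R-R intervals in seconds
    rr = [(b - a) / fs_hz for a, b in zip(r_peaks, r_peaks[1:])]
    mean_rr = sum(rr) / len(rr)
    return 60.0 / mean_rr
```

For example, peaks spaced 250 samples apart at a 250 Hz sampling rate correspond to one beat per second, i.e. 60 bpm.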

  8. Development of slow control system for the Belle II ARICH counter

    Science.gov (United States)

    Yonenaga, M.; Adachi, I.; Dolenec, R.; Hataya, K.; Iori, S.; Iwata, S.; Kakuno, H.; Kataura, R.; Kawai, H.; Kindo, H.; Kobayashi, T.; Korpar, S.; Križan, P.; Kumita, T.; Mrvar, M.; Nishida, S.; Ogawa, K.; Ogawa, S.; Pestotnik, R.; Šantelj, L.; Sumiyoshi, T.; Tabata, M.; Yusa, Y.

    2017-12-01

    A slow control system (SCS) for the Aerogel Ring Imaging Cherenkov (ARICH) counter in the Belle II experiment was newly developed and coded within the development frameworks of the Belle II DAQ software. The ARICH is based on 420 Hybrid Avalanche Photo-Detectors (HAPDs). Each HAPD has 144 pixels to be read out and requires 6 power supply (PS) channels, so a total of 2520 PS channels and 60,480 pixels have to be configured and controlled. Graphical User Interfaces (GUIs) with detector-oriented and device-oriented views were also implemented to ease detector operation. The ARICH SCS is in operation for detector construction and cosmic-ray tests. The paper describes the detailed features of the SCS and preliminary results from operating a reduced set of hardware, which confirm the scalability to the full detector.
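
The channel bookkeeping above can be modelled as a tiny configuration structure. The dataclass layout below is illustrative only, not the Belle II software; the numbers (420 HAPDs, 144 pixels and 6 PS channels each) come from the abstract.

```python
# Back-of-the-envelope model of the ARICH channel counts quoted above.

from dataclasses import dataclass

@dataclass(frozen=True)
class HapdConfig:
    hapd_id: int
    n_pixels: int = 144        # readout pixels per HAPD
    n_ps_channels: int = 6     # power supply channels per HAPD

def detector_totals(n_hapds: int = 420) -> tuple[int, int]:
    """Return (total PS channels, total pixels) for the full detector."""
    hapds = [HapdConfig(i) for i in range(n_hapds)]
    ps = sum(h.n_ps_channels for h in hapds)
    pixels = sum(h.n_pixels for h in hapds)
    return ps, pixels
```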

  9. Study of characteristic response of pressure control system in order to obtain the design parameters of the new control system MARK V1 turbine in Cofrentes nuclear power plant

    International Nuclear Information System (INIS)

    Palomo Anaya, M. Jose; Ruiz Bueno, Gregorio; Vauqer Perez, Juan I.; Curiel Nieva, Marceliano

    2011-01-01

    This paper presents the results obtained from the IBE-CNC/DAQ-090827 project, conducted by the company Titania Servicios Tecnologicos, S.L. in collaboration with the Instituto de Seguridad Industrial, Radiofisica y Medioambiental (ISIRYM) of the Universidad Politecnica de Valencia, for the company Iberdrola Generacion S.A. The objective is the acquisition of the pressure sensor signal and its measurement at points C85 and N32 of the cabin of the Turbine Control System at Cofrentes Nuclear Power Plant. From the acquired data, the Bode plot of the crossed signals can be obtained, as requested in technical specification IM 0191 I. The frequency response (i.e., how the system's gain and phase vary with frequency) defines the system dynamics. (author)
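
The frequency-response idea can be sketched numerically. The first-order lag below, G(s) = 1/(1 + s·tau), is a generic textbook illustration, not the Cofrentes pressure-control model: its gain at angular frequency w is 1/sqrt(1 + (w·tau)²) and its phase is −atan(w·tau), which is exactly the kind of information a Bode plot displays.

```python
# Gain and phase of a first-order lag G(s) = 1/(1 + s*tau) at angular
# frequency w. Illustrative only; the plant parameters are not from the paper.

import math

def first_order_response(tau: float, w: float) -> tuple[float, float]:
    """Return (gain, phase in degrees) of 1/(1 + s*tau) at angular freq w."""
    gain = 1.0 / math.sqrt(1.0 + (w * tau) ** 2)
    phase = -math.degrees(math.atan(w * tau))
    return gain, phase
```

At the corner frequency w = 1/tau the gain drops to 1/sqrt(2) (about −3 dB) and the phase lag reaches 45 degrees.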

  10. High performance image acquisition and processing architecture for fast plant system controllers based on FPGA and GPU

    International Nuclear Information System (INIS)

    Nieto, J.; Sanz, D.; Guillén, P.; Esquembri, S.; Arcas, G. de; Ruiz, M.; Vega, J.; Castro, R.

    2016-01-01

    Highlights: • To test an image acquisition and processing system for Camera Link devices based on an FPGA, compliant with ITER fast controllers. • To move data acquired from the NI1483-NIPXIe7966R set directly to an NVIDIA GPU using NVIDIA GPUDirect RDMA technology. • To obtain a methodology to include GPU processing in ITER Fast Plant Controllers, using EPICS integration through Nominal Device Support (NDS). - Abstract: The two dominant technologies in real-time image processing are the Field Programmable Gate Array (FPGA) and the Graphics Processing Unit (GPU), owing to their algorithm parallelization capabilities. However, little work has been done to standardize how these technologies can be integrated into data acquisition systems where control and supervisory requirements are in place, such as ITER (International Thermonuclear Experimental Reactor). This work proposes an architecture, and a development methodology, for image acquisition and processing systems based on FPGAs and GPUs compliant with ITER fast controller solutions. A use case based on a Camera Link device connected to an FPGA DAQ device (National Instruments FlexRIO technology) and an NVIDIA Tesla series GPU card has been developed and tested. The proposed architecture is designed to optimize system performance by minimizing data transfer operations and CPU intervention through the use of NVIDIA GPUDirect RDMA and DMA technologies, which move data directly between the hardware elements (FPGA DAQ, GPU, CPU) and avoid intermediate CPU memory buffers. A special effort has been made to provide a development methodology that, while maintaining the highest possible abstraction from low-level implementation details, yields solutions that conform to CODAC Core System standards by providing EPICS and Nominal Device Support.

  11. High performance image acquisition and processing architecture for fast plant system controllers based on FPGA and GPU

    Energy Technology Data Exchange (ETDEWEB)

    Nieto, J., E-mail: jnieto@sec.upm.es [Grupo de Investigación en Instrumentación y Acústica Aplicada, Universidad Politécnica de Madrid, Crta. Valencia Km-7, Madrid 28031 (Spain); Sanz, D.; Guillén, P.; Esquembri, S.; Arcas, G. de; Ruiz, M. [Grupo de Investigación en Instrumentación y Acústica Aplicada, Universidad Politécnica de Madrid, Crta. Valencia Km-7, Madrid 28031 (Spain); Vega, J.; Castro, R. [Asociación EURATOM/CIEMAT para Fusión, Madrid (Spain)

    2016-11-15

    Highlights: • To test an image acquisition and processing system for Camera Link devices based on an FPGA, compliant with ITER fast controllers. • To move data acquired from the NI1483-NIPXIe7966R set directly to an NVIDIA GPU using NVIDIA GPUDirect RDMA technology. • To obtain a methodology to include GPU processing in ITER Fast Plant Controllers, using EPICS integration through Nominal Device Support (NDS). - Abstract: The two dominant technologies in real-time image processing are the Field Programmable Gate Array (FPGA) and the Graphics Processing Unit (GPU), owing to their algorithm parallelization capabilities. However, little work has been done to standardize how these technologies can be integrated into data acquisition systems where control and supervisory requirements are in place, such as ITER (International Thermonuclear Experimental Reactor). This work proposes an architecture, and a development methodology, for image acquisition and processing systems based on FPGAs and GPUs compliant with ITER fast controller solutions. A use case based on a Camera Link device connected to an FPGA DAQ device (National Instruments FlexRIO technology) and an NVIDIA Tesla series GPU card has been developed and tested. The proposed architecture is designed to optimize system performance by minimizing data transfer operations and CPU intervention through the use of NVIDIA GPUDirect RDMA and DMA technologies, which move data directly between the hardware elements (FPGA DAQ, GPU, CPU) and avoid intermediate CPU memory buffers. A special effort has been made to provide a development methodology that, while maintaining the highest possible abstraction from low-level implementation details, yields solutions that conform to CODAC Core System standards by providing EPICS and Nominal Device Support.

  12. Design, modeling, simulation and evaluation of a distributed energy system

    Science.gov (United States)

    Cultura, Ambrosio B., II

    needed in order to increase the reliability of the DER system. Furthermore, a new computer-based data acquisition (DAQ) system for the DER has been designed and installed. The DAQ system is an essential component of PC-based measurement, used to acquire and store data. The design and installation of signal conditioning improve the accuracy, effectiveness and safety of measurements through capabilities such as amplification, isolation, and filtering. A LabVIEW program was used to interface and communicate between the DAQ devices and the personal computer. The overall Simulink model of the DER system is presented in the last chapter; the accompanying discussion explains the operation of a grid-connected DER. This model can serve as a basis or future reference for the design and installation of DER projects. It can also be used in converting the grid-connected DER system into a smart grid system, which will be the next research direction to explore.
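
Two of the signal-conditioning steps mentioned above, amplification and filtering, can be sketched in a few lines. The gain and window size below are arbitrary examples, not values from this work.

```python
# Illustrative signal conditioning: apply a fixed gain, then smooth the
# amplified samples with a trailing moving average.

def condition(samples: list[float], gain: float = 10.0, window: int = 3) -> list[float]:
    """Amplify raw sensor samples, then smooth with a moving average."""
    amplified = [gain * s for s in samples]
    smoothed = []
    for i in range(len(amplified)):
        lo = max(0, i - window + 1)          # trailing window start
        chunk = amplified[lo:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed
```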

  13. Investigation of High-Level Synthesis tools’ applicability to data acquisition systems design based on the CMS ECAL Data Concentrator Card example

    CERN Document Server

    HUSEJKO, Michal; RASTEIRO DA SILVA, Jose Carlos

    2015-01-01

    High-Level Synthesis (HLS) for Field-Programmable Gate Array (FPGA) programming is becoming a practical alternative to the well-established VHDL and Verilog languages. This paper describes a case study in the use of HLS tools to design FPGA-based data acquisition systems (DAQ). We present the implementation of the CERN CMS detector ECAL Data Concentrator Card (DCC) functionality in HLS and the lessons learned from using the HLS design flow. The DCC functionality and a definition of the initial system-level performance requirements (latency, bandwidth, and throughput) are presented. We describe how its packet-processing, control-centric algorithm was implemented with the VHDL and Verilog languages. We then show how the HLS flow can speed up design-space exploration by providing loose coupling between function interface design and function algorithm implementation. We conclude with results of real-life hardware tests performed with the HLS-generated design using a DCC Tester system.

  14. Investigation of High-Level Synthesis tools’ applicability to data acquisition systems design based on the CMS ECAL Data Concentrator Card example

    Science.gov (United States)

    HUSEJKO, Michal; EVANS, John; RASTEIRO DA SILVA, Jose Carlos

    2015-12-01

    High-Level Synthesis (HLS) for Field-Programmable Gate Array (FPGA) programming is becoming a practical alternative to the well-established VHDL and Verilog languages. This paper describes a case study in the use of HLS tools to design FPGA-based data acquisition systems (DAQ). We present the implementation of the CERN CMS detector ECAL Data Concentrator Card (DCC) functionality in HLS and the lessons learned from using the HLS design flow. The DCC functionality and a definition of the initial system-level performance requirements (latency, bandwidth, and throughput) are presented. We describe how its packet-processing, control-centric algorithm was implemented with the VHDL and Verilog languages. We then show how the HLS flow can speed up design-space exploration by providing loose coupling between function interface design and function algorithm implementation. We conclude with results of real-life hardware tests performed with the HLS-generated design using a DCC Tester system.

  15. The DISTO data acquisition system at SATURNE

    International Nuclear Information System (INIS)

    Balestra, F.; Bedfer, Y.; Bertini, R.

    1998-01-01

    The DISTO collaboration has built a large-acceptance magnetic spectrometer designed to provide broad kinematic coverage of multiparticle final states produced in pp scattering. The spectrometer has been installed in the polarized proton beam of the Saturne accelerator in Saclay to study polarization observables in the pp → pK⁺Y reaction (Y = Λ, Σ⁰ or Y*, with polarized beam proton and hyperon) and vector meson production (φ, ω and ρ) in pp collisions. The data acquisition system is based on a VME 68030 CPU running the OS/9 operating system, housed in a single VME crate together with the CAMAC interface, the triple-port ECL memories, and four RISC R3000 CPUs. The digitization of detector signals is performed by PCOS III and FERA front-end electronics. Data from several events belonging to a single Saturne extraction are stored in the VME triple-port ECL memories using a hardwired fast sequencer. The buffer, optionally filtered by the RISC R3000 CPUs, is recorded on a DLT cassette by the DAQ CPU, using the on-board SCSI interface, during the acceleration cycle. Two UNIX workstations are connected to the VME CPUs through a fast parallel bus and the Local Area Network; they analyze a subset of events for on-line monitoring. In the present configuration, the data acquisition system is able to read and record 3500 events/burst with a dead time of 15%

  16. Dual-Axis Solar Tracking System for Maximum Power Production in PV Systems

    Directory of Open Access Journals (Sweden)

    Muhd.Ikram Mohd. Rashid

    2015-12-01

    The power developed in a solar energy system depends fundamentally on the amount of sunlight captured by the photovoltaic modules/arrays. This paper describes a simple electro-mechanical dual-axis solar tracking system designed and developed in this study. Control of the two axes is achieved by pulses generated from a data acquisition (DAQ) card and fed into four relays. This approach was chosen to avoid the errors that typically arise in sensor-based methods. The mathematical models of the solar elevation and azimuth angles were programmed using Borland C++ Builder. The performance and accuracy of the developed system were evaluated with a PV panel at latitude 3.53° N and longitude 103.5° E in Malaysia. The results reflect the effectiveness of the developed tracking system in terms of energy yield when compared with that of a fixed panel: overall, 20%, 23% and 21% additional energy was produced for the months of March, April and May, respectively, using the tracker developed in this study.
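
The time-based tracking approach above rests on standard solar-position formulas. The sketch below reconstructs the elevation model generically (Cooper's declination approximation plus the usual elevation relation); the paper's Borland C++ implementation is not available, so this is not its code.

```python
# Solar elevation from latitude, day of year and hour angle (0 at solar noon),
# using standard textbook formulas.

import math

def declination_deg(day_of_year: int) -> float:
    """Approximate solar declination (Cooper's formula), in degrees."""
    return 23.45 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))

def elevation_deg(lat_deg: float, day_of_year: int, hour_angle_deg: float) -> float:
    """Solar elevation angle in degrees."""
    lat, dec, ha = (math.radians(x) for x in
                    (lat_deg, declination_deg(day_of_year), hour_angle_deg))
    s = math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(ha)
    return math.degrees(math.asin(s))
```

As a sanity check, at the equator on an equinox day (around day 81) at solar noon the Sun is essentially overhead, so the elevation is close to 90°.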

  17. High Availability of RAPIENET

    International Nuclear Information System (INIS)

    Yoon, G.; Oh, J. S.; Kwon, D. H.; Kwon, S. C.; Park, Y. O.

    2012-01-01

    Many industrial customers are no longer satisfied with conventional Ethernet-based communications; they require more accurate, more flexible, and more reliable technology for their control and measurement systems. Hence, Ethernet-based high-availability networks are becoming an important topic in the control and measurement fields. In this paper, we introduce a new redundant programmable logic controller (PLC) concept, based on the Real-time Automation Protocols for Industrial Ethernet (RAPIEnet). RAPIEnet has intrinsic redundancy built into its network topology, with hardware-based recovery. We define a redundant PLC system switching model and demonstrate its performance, including the RAPIEnet recovery time

  18. Electronics design of the RPC system for the OPERA muon

    International Nuclear Information System (INIS)

    Acquafredda, R.; Ambrosio, M.; Consiglio, L.

    2004-01-01

    The present document describes the front-end electronics of the RPC system that instruments the magnetic muon spectrometer of the OPERA experiment. The main task of the OPERA spectrometer is to provide particle-tracking information for muon identification and to simplify the matching between the Precision Trackers. As no external trigger is foreseen for the experiment, the spectrometer electronics must be self-triggered, with single-plane readout capability; moreover, precise time information must be added to each event frame for off-line reconstruction. The readout electronics is made of three stages: the Front-End Boards (FEBs), the Controller Boards (CBs) and the Trigger Boards (TBs). The FEBs discriminate the incoming strip signals; a FAST-OR of the input signals is also available for generating the plane trigger signal. The FEB signals are acquired by the CB system, which performs zero suppression and manages communication with the DAQ and Slow Control. A Trigger Board allows operation in either self-trigger mode (the FEBs' FAST-OR signal starts the plane acquisition) or external-trigger mode (different conditions can be set on the FAST-OR signals generated from different planes)
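
The trigger logic above can be sketched as boolean combinations: each plane's FAST-OR is the logical OR of its strip discriminator outputs, and an external-trigger condition can require several planes in coincidence. The plane counts and the majority threshold below are illustrative, not OPERA parameters.

```python
# Toy model of plane-level FAST-OR and a multi-plane trigger condition.

def fast_or(strip_hits: list[bool]) -> bool:
    """Plane-level FAST-OR of the discriminated strip signals."""
    return any(strip_hits)

def external_trigger(planes: list[list[bool]], majority: int = 2) -> bool:
    """Fire when at least `majority` planes report a FAST-OR."""
    return sum(fast_or(p) for p in planes) >= majority
```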

  19. Prototype of time digitizing system for BESⅢ endcap TOF upgrade

    International Nuclear Information System (INIS)

    Cao Ping; Sun Weijia; Fan Huanhuan; Wang Siyu; Liu Shubin; An Qi; Ji Xiaolu

    2014-01-01

    The prototype of a time digitizing system for the BESⅢ endcap TOF (ETOF) upgrade is introduced in this paper. The ETOF readout electronics has a distributed architecture. Hit signals from the multi-gap resistive plate chambers (MRPCs) are output as LVDS by the front-end electronics (FEE) and sent to the back-end time digitizing system via long shielded differential twisted-pair cables. The ETOF digitizing system consists of two VME crates, each of which contains modules for time digitization, clock, trigger, fast control, etc. The time digitizing module (TDIG) of this prototype supports up to 72 electrical channels for hit measurement. The fast control (FCTL) module can operate in barrel or endcap mode: the barrel FCTL fans out fast control signals from the trigger system to the endcap FCTLs, merges data from the endcaps, and transfers them to the trigger system. Without modifying the barrel TOF (BTOF) structure, this time digitizing architecture improves ETOF performance without degrading BTOF performance. Lab experiments show that the time resolution of the digitizing system can be better than 20 ps, and that the data throughput to the DAQ can be about 92 Mbps. Beam experiments show that the total time resolution can be better than 45 ps. (authors)
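
The two resolution figures above are consistent if the contributions are independent: assuming Gaussian, uncorrelated terms, timing resolutions add in quadrature. The 40 ps detector-chain contribution below is an illustrative value, not a measured one from the paper.

```python
# Quadrature combination of independent timing contributions, in picoseconds.

import math

def total_resolution_ps(sigma_digitizer: float, sigma_detector: float) -> float:
    """Total resolution for independent Gaussian contributions."""
    return math.hypot(sigma_digitizer, sigma_detector)
```

For example, a 20 ps digitizer combined with a hypothetical 40 ps detector chain gives about 44.7 ps, below the 45 ps total reported from beam tests.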

  20. An integrated system for large scale scanning of nuclear emulsions

    Energy Technology Data Exchange (ETDEWEB)

    Bozza, Cristiano, E-mail: kryss@sa.infn.it [University of Salerno and INFN, via Ponte Don Melillo, Fisciano 84084 (Italy); D'Ambrosio, Nicola [Laboratori Nazionali del Gran Sasso, S.S. 17 BIS km 18.910, Assergi (AQ) 67010 (Italy); De Lellis, Giovanni [University of Napoli and INFN, Complesso Universitario di Monte Sant'Angelo, via Cintia Ed. G, Napoli 80126 (Italy); De Serio, Marilisa [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); Di Capua, Francesco [INFN Napoli, Complesso Universitario di Monte Sant'Angelo, via Cintia Ed. G, Napoli 80126 (Italy); Di Crescenzo, Antonia [University of Napoli and INFN, Complesso Universitario di Monte Sant'Angelo, via Cintia Ed. G, Napoli 80126 (Italy); Di Ferdinando, Donato [INFN Bologna, viale B. Pichat 6/2, Bologna 40127 (Italy); Di Marco, Natalia [Laboratori Nazionali del Gran Sasso, S.S. 17 BIS km 18.910, Assergi (AQ) 67010 (Italy); Esposito, Luigi Salvatore [Laboratori Nazionali del Gran Sasso, now at CERN, Geneva (Switzerland); Fini, Rosa Anna [INFN Bari, via E. Orabona 4, Bari 70125 (Italy); Giacomelli, Giorgio [University of Bologna and INFN, viale B. Pichat 6/2, Bologna 40127 (Italy); Grella, Giuseppe [University of Salerno and INFN, via Ponte Don Melillo, Fisciano 84084 (Italy); Ieva, Michela [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); Kose, Umut [INFN Padova, via Marzolo 8, Padova (PD) 35131 (Italy); Longhin, Andrea; Mauri, Nicoletta [INFN Laboratori Nazionali di Frascati, via E. Fermi 40, Frascati (RM) 00044 (Italy); Medinaceli, Eduardo [University of Padova and INFN, via Marzolo 8, Padova (PD) 35131 (Italy); Monacelli, Piero [University of L'Aquila and INFN, via Vetoio Loc. Coppito, L'Aquila (AQ) 67100 (Italy); Muciaccia, Maria Teresa; Pastore, Alessandra [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); and others

    2013-03-01

    The European Scanning System, developed to analyse nuclear emulsions at high speed, has been completed with the development of a high-level software infrastructure to automate and support large-scale emulsion scanning. In one year, an average installation is capable of performing data-taking and online analysis on a total surface ranging from a few m² to tens of m², acquiring many billions of tracks, corresponding to several TB. This paper focuses on the procedures that have been implemented and on their impact on physics measurements. The system proved robust, reliable, fault-tolerant and user-friendly, and seldom needs assistance. A dedicated relational database system is the backbone of the whole infrastructure, storing the data themselves and not only catalogues of data files, as is common practice; this is a unique case among high-energy physics DAQ systems. The logical organisation of the system is described, and a summary is given of the physics measurements that are readily available by automated processing.
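
The "database as DAQ backbone" idea above amounts to writing track data directly into relational tables, so that online analysis becomes a query rather than a file scan. The schema below is invented for illustration using SQLite; the actual European Scanning System schema is far richer.

```python
# Illustrative sketch: store microtracks in a relational table and run an
# analysis query directly against it. Column names are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE microtrack (
    id INTEGER PRIMARY KEY, plate INTEGER, x REAL, y REAL,
    slope_x REAL, slope_y REAL, grains INTEGER)""")
conn.executemany(
    "INSERT INTO microtrack(plate, x, y, slope_x, slope_y, grains) VALUES (?,?,?,?,?,?)",
    [(1, 12.3, 4.56, 0.01, -0.02, 28),
     (1, 12.4, 4.60, 0.01, -0.02, 31),
     (2, 50.0, 7.00, 0.20, 0.05, 25)])
conn.commit()

# Online analysis as a query: count candidate tracks on a given plate
# above a grain threshold.
n = conn.execute(
    "SELECT COUNT(*) FROM microtrack WHERE plate = 1 AND grains >= 28").fetchone()[0]
```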

  1. An integrated system for large scale scanning of nuclear emulsions

    International Nuclear Information System (INIS)

    Bozza, Cristiano; D'Ambrosio, Nicola; De Lellis, Giovanni; De Serio, Marilisa; Di Capua, Francesco; Di Crescenzo, Antonia; Di Ferdinando, Donato; Di Marco, Natalia; Esposito, Luigi Salvatore; Fini, Rosa Anna; Giacomelli, Giorgio; Grella, Giuseppe; Ieva, Michela; Kose, Umut; Longhin, Andrea; Mauri, Nicoletta; Medinaceli, Eduardo; Monacelli, Piero; Muciaccia, Maria Teresa; Pastore, Alessandra

    2013-01-01

    The European Scanning System, developed to analyse nuclear emulsions at high speed, has been completed with the development of a high-level software infrastructure to automate and support large-scale emulsion scanning. In one year, an average installation is capable of performing data-taking and online analysis on a total surface ranging from a few m² to tens of m², acquiring many billions of tracks, corresponding to several TB. This paper focuses on the procedures that have been implemented and on their impact on physics measurements. The system proved robust, reliable, fault-tolerant and user-friendly, and seldom needs assistance. A dedicated relational database system is the backbone of the whole infrastructure, storing the data themselves and not only catalogues of data files, as is common practice; this is a unique case among high-energy physics DAQ systems. The logical organisation of the system is described, and a summary is given of the physics measurements that are readily available by automated processing

  2. Update of the Picker C9 irradiator control system of the gamma II room of the secondary laboratory of dosimetric calibration; Actualizacion del sistema de control del irradiador Picker C9 de la sala gamma II del laboratorio secundario de calibracion dosimetrica

    Energy Technology Data Exchange (ETDEWEB)

    Simon S, L. E.

    2016-07-01

    The Picker C9 irradiator is used for the calibration of various radiological instruments, and the control system that keeps it in operation is designed in the graphical programming software LabVIEW (Laboratory Virtual Instrument Engineering Workbench), whose major advantages include support for different types of communication, easy interconnection with other software, and recognition of a wide range of hardware devices, among others. The irradiator control system operates with the NI USB-6008 data acquisition (DAQ) module from National Instruments. The purpose of this work is to update the routines that make up the Picker C9 control system of the gamma II room of the secondary laboratory of dosimetric calibration, using the LabVIEW graphical programming software, as well as to configure the new data acquisition hardware implemented to control the Picker C9 irradiator system and ensure its operation. (Author)

  3. Labview applications based on field programmable gate array (FPGA) on temperature measurement system of heating-02

    International Nuclear Information System (INIS)

    Kussigit Santosa

    2013-01-01

    A temperature measurement system for the heating-02 test has been created using the LabVIEW 2011 software. Building this measurement system on an FPGA is a development of a previous measurement system that used a cDAQ-9188. The advantage of this system is its independence: execution can proceed on its own, without a computer. The scope of the present study was limited to the development, programming and testing of the data acquisition, focused on programming the FPGA modules embedded in the cRIO-9074. The temperature measurement system requires a National Instruments cRIO-9074 data acquisition module, a power supply, an NI 9023 module, a HIOKI 7011 current source, the LabVIEW 2011 software and a computer. The method used consists of assembling the temperature measurement system, programming the FPGA data acquisition, and building an acquisition-system interface that makes observation easy. From the experimental results, it can be concluded that the temperature measurement system runs well, so it is expected to be usable for actual measurements. (author)

  4. New operator assistance features in the CMS Run Control System

    Science.gov (United States)

    Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. R.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Vougioukas, M.; Zejdl, P.

    2017-10-01

    During Run-1 of the LHC, many operational procedures were automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down, or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors, such as those caused by single-event upsets, may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions that become necessary because of changes in configuration keys, changes in the set of included front-end drivers, or potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to recover from these automatically. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail, including first operational experience.
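
The "start a run from any state with one click" behaviour can be pictured as path-finding over a state machine: from the current sub-system state, compute the shortest action sequence to the running state. The states and transitions below are invented for illustration; they are not the actual CMS Run Control state model.

```python
# Toy state machine: compute, via BFS, the shortest action sequence taking
# the current state to the goal state.

from collections import deque

TRANSITIONS = {
    ("HALTED", "configure"): "CONFIGURED",
    ("CONFIGURED", "start"): "RUNNING",
    ("RUNNING", "stop"): "CONFIGURED",
    ("ERROR", "recover"): "HALTED",
}

def plan(current: str, goal: str = "RUNNING") -> list[str]:
    """Shortest sequence of actions taking `current` to `goal`."""
    queue = deque([(current, [])])
    seen = {current}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for (src, action), dst in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, actions + [action]))
    raise ValueError(f"no path from {current} to {goal}")
```

For example, a sub-system stuck in ERROR would be recovered, configured and started in one streamlined sequence.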

  5. New Operator Assistance Features in the CMS Run Control System

    Energy Technology Data Exchange (ETDEWEB)

    Andre, J.M.; et al.

    2017-11-22

    During Run-1 of the LHC, many operational procedures were automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down, or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors, such as those caused by single-event upsets, may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions that become necessary because of changes in configuration keys, changes in the set of included front-end drivers, or potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to recover from these automatically. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail, including first operational experience.

  6. Neurochemical evidence that cocaine- and amphetamine-regulated transcript (CART) 55-102 peptide modulates the dopaminergic reward system by decreasing the dopamine release in the mouse nucleus accumbens.

    Science.gov (United States)

    Rakovska, Angelina; Baranyi, Maria; Windisch, Katalin; Petkova-Kirova, Polina; Gagov, Hristo; Kalfin, Reni

    2017-09-01

    CART (Cocaine- and Amphetamine-Regulated Transcript) peptide is a neurotransmitter naturally occurring in the CNS and found mostly in the nucleus accumbens, ventral tegmental area, ventral pallidum, amygdala and striatum, brain regions associated with drug addiction. In the nucleus accumbens, known for its significant role in motivation, pleasure, reward and reinforcement learning, CART peptide inhibits cocaine- and amphetamine-induced, dopamine-mediated increases in locomotor activity and behavior, suggesting a CART peptide interaction with the dopaminergic system. Thus, in the present study, we examined the effect of CART (55-102) peptide on the basal, electrical field stimulation-evoked (EFS-evoked) (30 V, 2 Hz, 120 shocks) and returning basal dopamine (DA) release and on the release of the DA metabolites 3,4-dihydroxyphenyl acetaldehyde (DOPAL), 3,4-dihydroxyphenylacetic acid (DOPAC), homovanillic acid (HVA), 3,4-dihydroxyphenylethanol (DOPET), 3-methoxytyramine (3-MT), as well as on norepinephrine (NE) and dopamine-o-quinone (Daq), in isolated mouse nucleus accumbens, in a preparation in which any CART peptide effects on the dendrites or soma of ventral tegmental projection neurons have been excluded. We further extended our study to assess the effect of CART (55-102) peptide on the basal cocaine-induced release of dopamine and its metabolites DOPAL, DOPAC, HVA, DOPET and 3-MT, as well as on NE and Daq. To analyze the amount of [³H]dopamine, dopamine metabolites, Daq and NE in the nucleus accumbens superfusate, high-pressure liquid chromatography (HPLC) coupled with electrochemical, UV and radiochemical detection was used. CART (55-102) peptide, 0.1 μM, added alone, exerted: (i) a significant decrease in the basal and EFS-evoked levels of extracellular dopamine; (ii) a significant increase in the EFS-evoked and returning basal levels of the dopamine metabolites DOPAC and HVA, major products of dopamine degradation; and (iii) a significant decrease in the returning basal

  7. The digital readout system for the CMS electromagnetic calorimeter

    International Nuclear Information System (INIS)

    Lofstedt, Bo

    2000-01-01

    The CMS Electromagnetic Calorimeter is a high-precision detector demanding innovative solutions in order to cope with the high dynamic range and the extremely high resolution of the detector, as well as with the harsh environment created by the high level of radiation and the 4 T magnetic field. The readout system is partly placed within the detector and partly in the adjacent counting room. As the on-detector electronics must cope with the harsh environment, the use of standard components is excluded for this part of the system. This paper describes the solutions adopted for the high-precision analogue stages, the A-D conversion, the optical transfer of the raw data from the on-detector part to the so-called Upper Level Readout, placed in the counting room, and the functionality of the latter. The ECAL is instrumental in providing information to the first-level trigger process, and the generation of this information will be described. Also, the problem of reducing the raw data volume (6×10^12 bytes/s) to a level that can be handled by the central DAQ system (10^5 bytes/s) without degrading the physics performance will be discussed

  8. A new BPM-TOF system for CologneAMS

    Energy Technology Data Exchange (ETDEWEB)

    Pascovici, Gheorghe; Dewald, Alfred; Heinze, Stefan; Schiffer, Markus; Feuerstein, Mark [CologneAMS, Universitaet Koeln (Germany); Pfeiffer, Michael; Jolie, Jan; Zell, Karl Oskar [IKP, Universitaet Koeln (Germany); Blanckenburg, Friedhelm von [GFZ, Potsdam (Germany)

    2011-07-01

    At the center for accelerator mass spectrometry (CologneAMS) a complex beam detector consisting of a high resolution Beam Profile Monitor (BPM) and a Time of Flight (TOF) spectrometer with tracking capabilities was designed especially for the needs of the Cologne AMS facility. The beam detector assembly is designed to match the beam specifications of the 6 MV Tandetron AMS setup and its DAQ system, which is presently in the commissioning phase at the IKP of the University of Cologne. The BPM-TOF system will have a reconfigurable structure: either a very fast TOF subsystem with a small active area, or a more complex BPM-TOF detector with beam tracking capabilities and a large active area. The system aims at background suppression in the spectrometry of heavy ions, e.g. U, Cm, Pu, Am, and could also be used as an additional filter, e.g. for the isobar {sup 36}S in the spectrometry of {sup 36}Cl.

  9. LHCb : The LHCb trigger system and its upgrade

    CERN Multimedia

    Dziurda, Agnieszka

    2015-01-01

    The current LHCb trigger system consists of a hardware level, which reduces the LHC inelastic collision rate of 30 MHz to 1 MHz, at which the entire detector is read out. In a second level, implemented in a farm of 20k parallel-processing CPUs, the event rate is reduced to about 5 kHz. We review the performance of the LHCb trigger system during Run I of the LHC. Special attention is given to the use of multivariate analyses in the High Level Trigger. The major bottleneck for hadronic decays is the hardware trigger. LHCb plans a major upgrade of the detector and DAQ system in the LHC shutdown of 2018, enabling a purely software based trigger to process the full 30 MHz of inelastic collisions delivered by the LHC. We demonstrate that the planned architecture will be able to meet this challenge. We discuss the use of disk space in the trigger farm to buffer events while performing run-by-run detector calibrations, and the way this real time calibration and subsequent full event reconstruction will allow LHCb to ...

  10. Systems

    International Nuclear Information System (INIS)

    Anon.

    1980-01-01

    Papers in this session describe the concept of a mined geologic disposal system and methods for ensuring that the system, when developed, will meet all technical requirements. Also presented in the session are analyses of system parameters, such as cost and nuclear criticality potential, as well as a technical analysis of a requirement that the system permit retrieval of the waste for some period of time. The final paper discusses studies under way to investigate technical alternatives or complements to the mined geologic disposal system. Titles of the presented papers are: (1) Waste Isolation System; (2) Waste Isolation Economics; (3) BWIP Technical Baseline; (4) Criticality Considerations in Geologic Disposal of High-Level Waste; (5) Retrieving Nuclear Wastes from Repository; (6) NWTS Programs for the Evaluation of Technical Alternatives or Complements to Mined Geologic Repositories - Purpose and Objectives

  11. systems

    Directory of Open Access Journals (Sweden)

    Alexander Leonessa

    2000-01-01

    Full Text Available A nonlinear robust control-system design framework predicated on a hierarchical switching controller architecture parameterized over a set of moving nominal system equilibria is developed. Specifically, using equilibria-dependent Lyapunov functions, a hierarchical nonlinear robust control strategy is developed that robustly stabilizes a given nonlinear system over a prescribed range of system uncertainty by robustly stabilizing a collection of nonlinear controlled uncertain subsystems. The robust switching nonlinear controller architecture is designed based on a generalized (lower semicontinuous) Lyapunov function obtained by minimizing a potential function over a given switching set induced by the parameterized nominal system equilibria. The proposed framework robustly stabilizes a compact positively invariant set of a given nonlinear uncertain dynamical system with structured parametric uncertainty. Finally, the efficacy of the proposed approach is demonstrated on a jet engine propulsion control problem with uncertain pressure-flow map data.

  12. The implementation of a data acquisition and service system based on HDF5

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y., E-mail: cheny@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Wang, F.; Li, S. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Xiao, B.J. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); School of nuclear science and technology, University of Science and Technology of China, Hefei, Anhui (China); Yang, F. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Department of Computer Science, Anhui Medical University, Hefei, Anhui (China)

    2016-11-15

    Highlights: • A new data acquisition and service system has been designed and implemented for a new reversed field pinch (RFP) magnetic confinement device. • The new data acquisition and service system is based on HDF5. • It is an entire system including acquisition, storage and data retrieval. • The system is easy to extend and maintain due to its modular design. - Abstract: A data acquisition and service system based on HDF5 has been designed. It includes four components: data acquisition console, data acquisition subsystem, data archive system and data service. The data acquisition console manages all DAQ information and controls the acquisition process. The data acquisition subsystem supports continuous data acquisition at different sampling rates, which can be divided into low, medium and high levels. All experimental data are remotely transferred to the data archive system, which adopts HDF5 as its low-level data storage format. The hierarchical data structure of HDF5 is useful for efficiently managing the experimental data and allows users to define special data types and compression filters, which can be useful for dealing with special signals. Several data service tools have also been developed so that users can get data service via Client/Server or Browser/Server. The system will be demonstrated on the Keda Torus eXperiment (KTX) device, which is a new Reversed Field Pinch (RFP) magnetic confinement device. The details are presented in the paper.
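The hierarchical group/dataset layout that makes HDF5 attractive for shot data can be illustrated with a small in-memory sketch. This is plain Python rather than the HDF5 library, and the path names (diagnostics/magnetics/ip, the shot naming) are invented for illustration:

```python
# Minimal in-memory sketch of an HDF5-like hierarchy: groups nest like
# directories, datasets are leaves addressed by slash-separated paths.
# All names here are illustrative assumptions, not the system's actual layout.

class Group:
    def __init__(self):
        self.children = {}

    def create_dataset(self, path, data):
        parts = path.strip("/").split("/")
        node = self
        for p in parts[:-1]:                    # walk/create intermediate groups
            node = node.children.setdefault(p, Group())
        node.children[parts[-1]] = list(data)   # a leaf holds the samples
        return node.children[parts[-1]]

    def __getitem__(self, path):
        node = self
        for p in path.strip("/").split("/"):
            node = node.children[p]
        return node

shot = Group()                                  # stands in for one shot file
shot.create_dataset("diagnostics/magnetics/ip", [1.0, 2.0, 3.0])
print(shot["diagnostics/magnetics/ip"])         # -> [1.0, 2.0, 3.0]
```

In the real system the same path-addressed tree would live in an HDF5 file, with per-dataset data types and compression filters chosen per signal.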

  13. The implementation of a data acquisition and service system based on HDF5

    International Nuclear Information System (INIS)

    Chen, Y.; Wang, F.; Li, S.; Xiao, B.J.; Yang, F.

    2016-01-01

    Highlights: • A new data acquisition and service system has been designed and implemented for a new reversed field pinch (RFP) magnetic confinement device. • The new data acquisition and service system is based on HDF5. • It is an entire system including acquisition, storage and data retrieval. • The system is easy to extend and maintain due to its modular design. - Abstract: A data acquisition and service system based on HDF5 has been designed. It includes four components: data acquisition console, data acquisition subsystem, data archive system and data service. The data acquisition console manages all DAQ information and controls the acquisition process. The data acquisition subsystem supports continuous data acquisition at different sampling rates, which can be divided into low, medium and high levels. All experimental data are remotely transferred to the data archive system, which adopts HDF5 as its low-level data storage format. The hierarchical data structure of HDF5 is useful for efficiently managing the experimental data and allows users to define special data types and compression filters, which can be useful for dealing with special signals. Several data service tools have also been developed so that users can get data service via Client/Server or Browser/Server. The system will be demonstrated on the Keda Torus eXperiment (KTX) device, which is a new Reversed Field Pinch (RFP) magnetic confinement device. The details are presented in the paper.

  14. UGV Interoperability Profile (IOP) Communications Profile, Version 0

    Science.gov (United States)

    2011-12-21

    Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications IEEE 802.3 Standards for Ethernet based LANs SAE AS5669A JAUS / SDP...OCU Operator Control Unit OFDM Orthogonal Frequency Division Multiplexing OSI Open Systems Interconnection P2I Physical/ Power Interface POE

  15. Timing Analysis of Rate Constrained Traffic for the TTEthernet Communication Protocol

    DEFF Research Database (Denmark)

    Tamas-Selicean, Domitian; Pop, Paul; Steiner, Wilfried

    2015-01-01

    Ethernet is a low-cost communication solution offering high transmission speeds. Although its applications extend beyond computer networking, Ethernet is not suitable for real-time and safety-critical systems. To alleviate this, several real-time Ethernet-based communication protocols have been...

  16. Exploring a new paradigm for accelerators and large experimental apparatus control systems

    International Nuclear Information System (INIS)

    Catani, L.; Ammendola, R.; Zani, F.; Bisegni, C.; Ciuffetti, P.; Di Pirro, G.; Mazzitelli, G.; Stecchi, A.; Calabro, S.; Foggetta, L.

    2012-01-01

    The integration of web technologies and web services has been, in recent years, one of the major trends in upgrading and developing control systems for accelerators and large experimental apparatuses. Usually, web technologies have been introduced to complement the control systems with smart add-ons and user-friendly services or, for instance, to safely allow access to the control system for users at remote sites. Despite this still narrow spectrum of employment, some software technologies developed for high-performance web services, although originally intended and optimized for those particular applications, offer features that would allow their deeper integration in a control system and, eventually, their use to develop some of the control system's core components. In this paper we present the conclusions of the preliminary investigation of a new design for an accelerator control system and associated machine data acquisition system (DAQ), based on a synergic combination of network-distributed object caching (DOC) and a non-relational key/value database (KVDB). We investigated these technologies with particular interest in performance, namely the speed of data storage and retrieval for the distributed caching, data throughput and query execution time for the database, and, especially, how much this performance can benefit from their inherent adaptability. (authors)
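As a rough illustration of the DOC plus KVDB combination investigated above, the following sketch pairs a toy key/value backend with a read-through object cache. The class names, TTL policy and parameter key are assumptions made for illustration, not the authors' actual design:

```python
import time

class KVStore:
    """Toy non-relational key/value backend (stand-in for a real KVDB)."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class ReadThroughCache:
    """Distributed-object-cache sketch: serve recently read machine records
    from memory, fall back to the KV backend on a miss or stale entry."""
    def __init__(self, backend, ttl=1.0):
        self.backend, self.ttl = backend, ttl
        self._cache, self.hits, self.misses = {}, 0, 0
    def get(self, key):
        entry = self._cache.get(key)
        if entry and time.monotonic() - entry[1] < self.ttl:
            self.hits += 1
            return entry[0]
        self.misses += 1                         # fetch from backend, refresh cache
        value = self.backend.get(key)
        self._cache[key] = (value, time.monotonic())
        return value

store = KVStore()
store.put("bpm/01/orbit", [0.12, -0.03])         # hypothetical machine record
cache = ReadThroughCache(store)
cache.get("bpm/01/orbit"); cache.get("bpm/01/orbit")
print(cache.hits, cache.misses)                  # -> 1 1
```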

  17. 40 Gbps data acquisition system for NectarCAM

    Science.gov (United States)

    Hoffmann, Dirk; Houles, Julien; NectarCAM Team; CTA Consortium, the

    2017-10-01

    The Cherenkov Telescope Array (CTA) will be the next generation ground-based gamma-ray observatory. It will be made up of approximately 100 telescopes of three different sizes, from 4 to 23 meters in diameter. The previously presented prototype of a high speed data acquisition (DAQ) system for CTA (CHEP 2012, [6]) has become concrete within the NectarCAM project, one of the most challenging camera projects with very demanding needs for bandwidth of data handling. We designed a Linux-PC system able to concentrate and process without packet loss the 40 Gb/s average data rate coming from the 265 Front End Boards (FEB) through Gigabit Ethernet links, and to reduce data to fit the two ten-Gigabit Ethernet downstream links by external trigger decisions as well as custom tailored compression algorithms. Within the given constraints, we implemented de-randomisation of the event fragments received as relatively small UDP packets emitted by the FEB, using off-the-shelf equipment as required by the project and for an operation period of at least 30 years. We tested out-of-the-box interfaces and used original techniques to cope with these requirements, and set up a test bench with hundreds of synchronous Gigabit links in order to validate and tune the acquisition chain including downstream data logging based on zeroMQ and Google ProtocolBuffers [8].
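The de-randomisation step described above, collecting small out-of-order event fragments from many Front End Boards until an event is complete, can be sketched as follows. The fragment format and board count are invented for illustration and are not the NectarCAM wire format:

```python
class EventBuilder:
    """Sketch of de-randomising per-event fragments that arrive interleaved,
    as small UDP packets from many front-end boards (FEBs) would."""
    def __init__(self, n_febs):
        self.n_febs = n_febs
        self.partial = {}                  # event_id -> {feb_id: payload}

    def add(self, event_id, feb_id, payload):
        frags = self.partial.setdefault(event_id, {})
        frags[feb_id] = payload
        if len(frags) == self.n_febs:      # all fragments seen: release the event
            return self.partial.pop(event_id)
        return None                        # event still incomplete

builder = EventBuilder(n_febs=3)
# fragments of two events interleaved, as network delivery would produce them
arrivals = [(1, 0, b"a"), (2, 0, b"x"), (1, 2, b"c"),
            (1, 1, b"b"), (2, 1, b"y"), (2, 2, b"z")]
complete = [ev for ev in (builder.add(*f) for f in arrivals) if ev]
print([sorted(e.values()) for e in complete])  # -> [[b'a', b'b', b'c'], [b'x', b'y', b'z']]
```

A real implementation would add time-outs for lost fragments and feed completed events to the trigger/compression stage; this sketch only shows the reordering logic.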

  18. Beam Test of the ATLAS Level-1 Calorimeter Trigger System

    CERN Document Server

    Garvey, J; Mahout, G; Moye, T H; Staley, R J; Thomas, J P; Typaldos, D; Watkins, P M; Watson, A; Achenbach, R; Föhlisch, F; Geweniger, C; Hanke, P; Kluge, E E; Mahboubi, K; Meier, K; Meshkov, P; Rühr, F; Schmitt, K; Schultz-Coulon, H C; Ay, C; Bauss, B; Belkin, A; Rieke, S; Schäfer, U; Tapprogge, T; Trefzger, T; Weber, GA; Eisenhandler, E F; Landon, M; Apostologlou, P; Barnett, B M; Brawn, I P; Davis, A O; Edwards, J; Gee, C N P; Gillman, A R; Mirea, A; Perera, V J O; Qian, W; Sankey, D P C; Bohm, C; Hellman, S; Hidvegi, A; Silverstein, S

    2005-01-01

    The Level-1 Calorimeter Trigger consists of a Preprocessor (PP), a Cluster Processor (CP), and a Jet/Energy-sum Processor (JEP). The CP and JEP receive digitised trigger-tower data from the Preprocessor and produce Regions-of-Interest (RoIs) and trigger multiplicities. The latter are sent in real time to the Central Trigger Processor (CTP) where the Level-1 decision is made. On receipt of a Level-1 Accept, Readout Driver Modules (RODs) provide intermediate results to the data acquisition (DAQ) system for monitoring and diagnostic purposes. RoI information is sent to the RoI Builder (RoIB) to help reduce the amount of data required for the Level-2 Trigger. The Level-1 Calorimeter Trigger system at the test beam consisted of 1 Preprocessor module, 1 Cluster Processor Module, 1 Jet/Energy Module and 2 Common Merger Modules. Calorimeter energies were successfully handled throughout the chain and trigger objects sent to the CTP. Level-1 Accepts were successfully produced and used to drive the readout path. Online diagno...

  19. System aspects of the ILC-electronics and power pulsing

    CERN Document Server

    Götlicher, P

    2007-01-01

    The requirements for the electronics of an experiment at the International Linear Collider (ILC) are driven by the bunch structure of the accelerator - short trains (1 ms) with a bunch-to-bunch lag of 0.3 μs interrupted by long empty intervals (199 ms) - and the precision physics. Based on developments of the CALICE collaboration, a system for highly granular dense calorimetry is presented. The talk covers the system aspects: — of compact sensors such as Si diodes and multi-pixel Geiger-mode photo sensors, — of the electromechanics with components embedded into the PCBs, — of integrating the functionality needed near the sensor into low-power ASICs, — of a DAQ chain, in which each channel triggers on its own and data selection is performed in PCs, and — of calibrating the calorimeter. With the high number of 100 million channels the power consumption and cooling have to be investigated carefully. Calculations demonstrate that active cooling inside the calorimeters can be avoided. But essential fo...

  20. An Update on ConSys Including a New LabVIEW FPGA Based LLRF System

    DEFF Research Database (Denmark)

    Worm, Torben; Nielsen, Jørgen S.

    ConSys, the Windows based control system for ASTRID and ASTRID2, is now a mature system, having been in operation for more than 15 years. All the standard programs (Console, plots, data logging, control setting store/restore etc.) are fully general and are configured through a database or file. ConSys is a standard publisher/subscriber system, where all nodes can act both as client and server. One very strong feature is the easy ability to make virtual devices (devices which do not depend on hardware directly, but combine hardware parameters). For ASTRID2 a new LabVIEW based Low-Level RF system has been made. This system uses a National Instruments NI-PCIe7852R DAQ card, which includes an on-board FPGA and is hosted in a standard PC. The fast (50 kHz) amplitude loop has been implemented on the FPGA, whereas the slower tuning and phase loops are implemented in the real-time system. An operator interface including...
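The publisher/subscriber model and the virtual-device idea mentioned in this record can be sketched in a few lines. The parameter names (current, energy, beam_power) and the derived quantity are invented for illustration and do not come from ConSys:

```python
class Node:
    """Minimal publisher/subscriber sketch: every node can both publish
    parameter updates and subscribe to them."""
    def __init__(self):
        self.subscribers = {}                    # parameter -> list of callbacks
    def subscribe(self, parameter, callback):
        self.subscribers.setdefault(parameter, []).append(callback)
    def publish(self, parameter, value):
        for cb in self.subscribers.get(parameter, []):
            cb(value)

# A "virtual device" combines hardware parameters without owning hardware:
# here it republishes a derived value whenever either input changes.
node = Node()
state = {}
def update_power(_):
    if "current" in state and "energy" in state:
        node.publish("beam_power", state["current"] * state["energy"])

node.subscribe("current", lambda v: (state.__setitem__("current", v), update_power(v)))
node.subscribe("energy", lambda v: (state.__setitem__("energy", v), update_power(v)))

readings = []
node.subscribe("beam_power", readings.append)    # a client watching the virtual device
node.publish("current", 2)
node.publish("energy", 3)
print(readings)                                  # -> [6]
```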

  1. Status, recent developments and perspective of TINE-powered video system, release 3

    International Nuclear Information System (INIS)

    Weisse, S.; Melkumyan, D.; Duval, P.

    2012-01-01

    Experience has shown that imaging software and hardware installations at accelerator facilities need to be changed, adapted and updated on a semi-permanent basis. On this premise the component-based core architecture of Video System 3 was founded. In design and implementation, emphasis was, is, and will be put on flexibility, performance, low latency, modularity, interoperability, use of open source, ease of use as well as reuse, good documentation and multi-platform capability. In the past year, a milestone was reached as Video System 3 entered production level at PITZ, Hasylab and PETRA III. Since then, the development path has been more strongly influenced by production-level experience and customer feedback. In this contribution, we describe the current status, layout, recent developments and perspective of the Video System. Focus will be put on the integration of recording and playback of video sequences into Archive/DAQ, a standalone installation of the Video System on a notebook, as well as experience running on Windows 7 64-bit. In addition, new client-side multi-platform GUI/application developments using Java are about to hit the surface. Last but not least, it must be mentioned that although the implementation of Release 3 is integrated into the TINE control system, it is modular enough that integration into other control systems can be considered. (authors)

  2. FE-I4 pixel chip characterization with USBpix3 test system

    Energy Technology Data Exchange (ETDEWEB)

    Filimonov, Viacheslav; Gonella, Laura; Hemperek, Tomasz; Huegging, Fabian; Janssen, Jens; Krueger, Hans; Pohl, David-Leon; Wermes, Norbert [University of Bonn, Bonn (Germany)

    2015-07-01

    The USBpix readout system is a small and lightweight test system for the ATLAS pixel readout chips. It is widely used to operate and characterize FE-I4 pixel modules in lab and test beam environments. For multi-chip modules the resources on the Multi-IO board, which is the central control unit of the readout system, are reaching their limits, which makes the simultaneous readout of more than one chip at a time challenging. Therefore an upgrade of the current USBpix system has been developed. The upgraded system is called USBpix3 - the main focus of the talk. Characterization of single-chip FE-I4 modules was performed with the USBpix3 prototype (digital, analog, threshold and source scans; tuning). PyBAR (Bonn ATLAS Readout in Python scripting language) was used as readout software. PyBAR consists of FE-I4 DAQ and data analysis libraries in Python. The presentation describes the USBpix3 system, results of FE-I4 module characterization, and preparation for multi-chip module and multi-module readout with USBpix3.

  3. Evolution of the Trigger and Data Acquisition System for the ATLAS experiment

    CERN Document Server

    Negri, A; The ATLAS collaboration

    2012-01-01

    The ATLAS experiment at the Large Hadron Collider at CERN relies on a complex and highly distributed Trigger and Data Acquisition (TDAQ) system to gather and select particle collision data at unprecedented energies and rates. The TDAQ is composed of three levels, which reduce the event rate from the design bunch-crossing rate of 40 MHz to an average event recording rate of about 200 Hz. The first part of this paper gives an overview of the operational performance of the DAQ system during 2011 and the first months of data taking in 2012. It describes how the flexibility inherent in the design of the system has been exploited to meet the changing needs of ATLAS data taking, and in some cases to push performance beyond the original design specification. The experience accumulated in TDAQ system operation during these years has also stimulated interest in exploring possible evolutions, despite the success of the current design. One attractive direction is to merge three systems - the second trigger level (L2), ...

  4. Embedded multi-channel data acquisition system on FPGA for Aditya Tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Rajpal, Rachana, E-mail: rachana@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Mandaliya, Hitesh, E-mail: hitesh@ipr.res.in [ITER, Cadarache (France); Patel, Jignesh, E-mail: jjp@ipr.res.in [ITER, Cadarache (France); Kumari, Praveena, E-mail: praveena@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Gautam, Pramila, E-mail: pramila@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Raulji, Vismaysinh, E-mail: vismay@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Edappala, Praveenlal, E-mail: praveen@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Pujara, H.D, E-mail: pujara@ipr.res [Institute for Plasma Research, Gandhinagar, Gujarat (India); Jha, R., E-mail: jha@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India)

    2016-11-15

    Highlights: • 64 channel data acquisition, interfaced to the PC/104 bus, using a single board computer. • Integration of all components in single hardware to make it standalone and portable. • Development of application software in Qt on the Linux platform for better performance and lower cost compared to Windows. • Explored and utilized FPGA resources for hardware interfacing. - Abstract: The 64 channel data acquisition board is designed to meet the future demand of acquisition channels for plasma diagnostics. The inherent features of the board are 16 bit resolution, programmable sampling rate up to 200 kS/s/ch and simultaneous acquisition. To make the system embedded and compact, 8-analog-input ADC chips, 4M × 16 bit RAM memory, Field Programmable Gate Arrays, a PC/104 platform and a single board computer are used. High speed timing control signals for all ADCs and RAMs are generated by the FPGA. The system is standalone, portable and interfaced through Ethernet. The acquisition application is developed in Qt on the Linux platform, on the SBC. Due to Ethernet connectivity and onboard processing, the system can be integrated into the Aditya and SST-1 data acquisition systems. The performance of the hardware was tested on Linux and Windows Embedded OS. The paper describes the design, hardware and software architecture, implementation and results of the 64 channel DAQ system.
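A quick back-of-envelope check of the record length implied by the stated specifications, under the assumption that "4M × 16 bit" means 4 Mi samples of onboard RAM per channel:

```python
# How long one channel can record into 4 Mi samples of RAM at the maximum
# stated sampling rate. "4 Mi samples per channel" is an assumption about
# how the "4M x 16 bit" figure is to be read.
ram_samples = 4 * 2**20          # 4,194,304 sixteen-bit samples
max_rate = 200_000               # 200 kS/s per channel
duration_s = ram_samples / max_rate
print(round(duration_s, 2))      # -> 20.97
```

So at the full 200 kS/s each channel buffers roughly 21 s of data before the RAM must be read out over Ethernet; lower sampling rates extend this proportionally.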

  5. Embedded multi-channel data acquisition system on FPGA for Aditya Tokamak

    International Nuclear Information System (INIS)

    Rajpal, Rachana; Mandaliya, Hitesh; Patel, Jignesh; Kumari, Praveena; Gautam, Pramila; Raulji, Vismaysinh; Edappala, Praveenlal; Pujara, H.D; Jha, R.

    2016-01-01

    Highlights: • 64 channel data acquisition, interfaced to the PC/104 bus, using a single board computer. • Integration of all components in single hardware to make it standalone and portable. • Development of application software in Qt on the Linux platform for better performance and lower cost compared to Windows. • Explored and utilized FPGA resources for hardware interfacing. - Abstract: The 64 channel data acquisition board is designed to meet the future demand of acquisition channels for plasma diagnostics. The inherent features of the board are 16 bit resolution, programmable sampling rate up to 200 kS/s/ch and simultaneous acquisition. To make the system embedded and compact, 8-analog-input ADC chips, 4M × 16 bit RAM memory, Field Programmable Gate Arrays, a PC/104 platform and a single board computer are used. High speed timing control signals for all ADCs and RAMs are generated by the FPGA. The system is standalone, portable and interfaced through Ethernet. The acquisition application is developed in Qt on the Linux platform, on the SBC. Due to Ethernet connectivity and onboard processing, the system can be integrated into the Aditya and SST-1 data acquisition systems. The performance of the hardware was tested on Linux and Windows Embedded OS. The paper describes the design, hardware and software architecture, implementation and results of the 64 channel DAQ system.

  6. High rate tests of the LHCb RICH Upgrade system

    CERN Multimedia

    Blago, Michele Piero

    2016-01-01

    One of the biggest challenges for the upgrade of the LHCb RICH detectors from 2020 is to read out the photon detectors at the full 40 MHz rate of the LHC proton-proton collisions. A test facility has been set up at CERN with the purpose of investigating the behaviour of the Multi-Anode PMTs (MaPMTs), which have been proposed for the upgrade, and their readout electronics at high trigger rates. The MaPMTs are illuminated with a monochromatic laser that can be triggered independently of the readout electronics. A first series of tests, including threshold scans, is performed at low trigger rates (20 kHz) for both the readout and the laser with the purpose of characterising the behaviour of the system under test. Then the trigger rate is increased in two separate steps. First the MaPMTs are exposed to high illumination by triggering the pulsed laser at a high (20 MHz) repetition rate while the DAQ is read out at the same low rate as before. In this way the performance of the MaPMTs and the attached electronics can be evaluated ...

  7. Development and implementation of a new trigger and data acquisition system for the HADES detector

    Energy Technology Data Exchange (ETDEWEB)

    Michel, Jan

    2012-11-16

    One of the crucial points of instrumentation in modern nuclear and particle physics is the setup of data acquisition systems (DAQ). In collisions of heavy ions, particles of special interest for research are often produced at very low rates resulting in the need for high event rates and a fast data acquisition. Additionally, the identification and precise tracking of particles requires fast and highly granular detectors. Both requirements result in very high data rates that have to be transported within the detector read-out system: Typical experiments produce data at rates of 200 to 1,000 MByte/s. The structure of the trigger and read-out systems of such experiments is quite similar: A central instance generates a signal that triggers read-out of all sub-systems. The signals from each detector system are then processed and digitized by front-end electronics before they are transported to a computing farm where data is analyzed and prepared for long-term storage. Some systems introduce additional steps (high level triggers) in this process to select only special types of events to reduce the amount of data to be processed later. The main focus of this work is put on the development of a new data acquisition system for the High Acceptance Di-Electron Spectrometer HADES located at the GSI Helmholtz Center for Heavy Ion Research in Darmstadt, Germany. Fully operational since 2002, its front-end electronics and data transport system were subject to a major upgrade program. The goal was an increase of the event rate capabilities by a factor of more than 20 to reach event rates of 20 kHz in heavy ion collisions and more than 50 kHz in light collision systems. The new electronics are based on FPGA-equipped platforms distributed throughout the detector. Data is transported over optical fibers to reduce the amount of electromagnetic noise induced in the sensitive front-end electronics. 
Besides the high data rates of up to 500 MByte/s at the design event rate of 20 kHz, the

  8. Development of a tracking detector system with multichannel scintillation fibers and PPD

    Energy Technology Data Exchange (ETDEWEB)

    Honda, R., E-mail: honda@lambda.phys.tohoku.ac.jp [Tohoku University, 6-3, Aramaki, Aoba-ku, Sendai, Miyagi 980-8578 (Japan); Japan Atomic Energy Agency (JAEA), 2-4, Shirakata, Shirane, Tokai, Ibaraki 319-1195 (Japan); Callier, S. [IN2P3/LAL, 91898 Orsay Cedex (France); Hasegawa, S. [Japan Atomic Energy Agency (JAEA), 2-4, Shirakata, Shirane, Tokai, Ibaraki 319-1195 (Japan); Ieiri, M. [High Energy Accelerator Research Organization (KEK), 1-1, Oho, Tsukuba 305-0801 (Japan); Matsumoto, Y.; Miwa, K. [Tohoku University, 6-3, Aramaki, Aoba-ku, Sendai, Miyagi 980-8578 (Japan); Nakamura, I. [High Energy Accelerator Research Organization (KEK), 1-1, Oho, Tsukuba 305-0801 (Japan); Raux, L.; De La Taille, C. [IN2P3/LAL, 91898 Orsay Cedex (France); Tanaka, M.; Uchida, T.; Yoshimura, K. [High Energy Accelerator Research Organization (KEK), 1-1, Oho, Tsukuba 305-0801 (Japan)

    2012-12-11

    For the J-PARC E40 experiment, which aims to measure differential cross-sections of {Sigma}p scatterings, a system to detect scattered protons from {Sigma}p scatterings is under development. The detection system consists of scintillation fibers with an MPPC readout. A prototype and readout electronics for the MPPC have already been developed. The prototype, consisting of a scintillation fiber tracker and a BGO calorimeter, was tested with a proton beam of 80 MeV. Energy resolutions of 22.0% ({sigma}) for the tracker and 1.0% ({sigma}) for the calorimeter were obtained for 1 MeV and 70 MeV energy deposits, respectively. The prototype readout electronics has an ASIC for multichannel operation, EASIROC, and a Silicon TCP (SiTCP) interface to communicate with a DAQ system. Its measured data transfer rate was 14 kHz. The required performance for the prototype system has been achieved except for the energy resolution of the prototype fiber tracker.

  9. The ATLAS Data Acquisition and High Level Trigger Systems: Experience and Upgrade Plans

    CERN Document Server

    Hauser, R; The ATLAS collaboration

    2012-01-01

    The ATLAS DAQ/HLT system reduces the Level-1 rate of 75 kHz to a few kHz event-build rate after Level 2 and a few hundred Hz output rate to disk. It has operated with an average data-taking efficiency of about 94% during recent years. The performance has far exceeded the initial requirements, with about 5 kHz event building rate and 500 Hz output rate in 2012, driven mostly by physics requirements. Several improvements and upgrades are foreseen in the upcoming long shutdowns, both to simplify the existing architecture and to improve the performance. On the network side new core switches will be deployed, and possible use of 10 Gbit Ethernet links for critical areas is foreseen. An improved read-out system to replace the existing solution based on PCI is under development. A major evolution of the high level trigger system foresees a merging of the Level-2 and Event Filter functionality on a single node, including the event building. This will represent a big simplification of the existing system, while ...

  10. Development of electronics and data acquisition system for independent calibration of electron cyclotron emission radiometer

    Energy Technology Data Exchange (ETDEWEB)

    Kumari, Praveena, E-mail: praveena@ipr.res.in; Raulji, Vismaysinh; Mandaliya, Hitesh; Patel, Jignesh; Siju, Varsha; Pathak, S.K.; Rajpal, Rachana; Jha, R.

    2016-11-15

    Highlights: • Indigenous development of an electronics and data acquisition system to digitize signals for a desired time and to automate the calibration process. • 16-bit DAQ board with a form factor of 90 × 89 mm. • VHDL code written to generate control signals for the PC104 bus, ADC and RAM. • Averaging is done in two ways: single-point averaging and additive averaging. - Abstract: A signal conditioning unit (SCU) and a multichannel data acquisition system (DAS) have been developed and installed to automate the frequently required absolute calibration of the ECE radiometer system. The DAS is an indigenously developed, economical system based on a Single Board Computer (SBC). The onboard RAM of 64 K per channel enables simultaneous and continuous acquisition. A LabVIEW-based graphical user interface provides commands, locally or remotely, to acquire, process, plot and finally save the data in binary format. The weak signals received from the radiometer are amplified and filtered by the SCU and acquired through the DAS for the set time and sampling frequency. Stored data are processed and analyzed offline with a LabVIEW utility. The calibration process was run for two hours continuously at different sampling frequencies (100 Hz to 1 kHz) at two reference temperatures (hot body and room temperature). The detailed hardware and software design and testing results are explained in the paper.
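
    The two averaging modes are named but not defined in the abstract; one plausible reading (an assumption, not the authors' definition) is point-by-point averaging over repeated sweeps versus additive block averaging within a single trace:

    ```python
    def single_point_average(sweeps):
        """Average repeated sweeps point by point into one trace
        (assumed meaning of 'single point averaging')."""
        n = len(sweeps)
        length = len(sweeps[0])
        return [sum(s[i] for s in sweeps) / n for i in range(length)]

    def additive_average(trace, block):
        """Collapse consecutive blocks of samples into their mean
        (assumed meaning of 'additive averaging')."""
        return [sum(trace[i:i + block]) / block
                for i in range(0, len(trace), block)]
    ```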

  11. The Influence of the Density of Coconut Fiber as Stack in Thermo-Acoustics Refrigeration System

    Science.gov (United States)

    Hartulistiyoso, E.; Yulianto, M.; Sucahyo, L.

    2018-05-01

    An experimental study of using coconut fiber of varying density as the stack in a thermo-acoustic refrigeration system has been carried out. The stack is the device described as the “heart” of a thermo-acoustic refrigeration system. The stack length was a fixed parameter in this experiment. The performance of the coconut fiber was evaluated as a function of stack density (30%, 50% and 70%), stack position (0 to 34 cm from the sound generator), and sound generator frequency (150 Hz, 200 Hz, 250 Hz and 300 Hz). The inside, outside and environment temperatures were collected every second using a data acquisition (DAQ) system. The results showed that increasing the stack density increases the performance of the thermo-acoustic refrigeration system. The highest density produced a temperature difference between the cold and hot sides of 5.4 °C. In addition, the stack position and sound generator frequency play an important role in the performance of the thermo-acoustic refrigeration system for all densities.

  12. SYSTEM

    Directory of Open Access Journals (Sweden)

    K. Swarnalatha

    2013-01-01

    Full Text Available Risk analysis of urban aquatic systems due to heavy metals turns significant due to their peculiar properties viz. persistence, non-degradability, toxicity, and accumulation. Akkulam Veli (AV), an urban tropical lake in south India, is subjected to various environmental stresses due to multiple waste discharges, sand mining, developmental activities, tourism related activities etc. Hence, a comprehensive approach is adopted for risk assessment using a modified degree of contamination factor, toxicity units based on numerical sediment quality guidelines (SQGs), and potential ecological risk indices. The study revealed the presence of toxic metals such as Cr, Cd, Pb and As, and the lake is rated under the ‘low ecological risk’ category.

  13. Precision Time Protocol support hardware for ATCA control and data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Correia, Miguel, E-mail: miguelfc@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal); Sousa, Jorge; Carvalho, Bernardo B.; Santos, Bruno; Carvalho, Paulo F.; Rodrigues, António P.; Combo, Álvaro M.; Pereira, Rita C. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal); Correia, Carlos M.B.A. [Centro de Instrumentação, Departamento de Física, Universidade de Coimbra, 3004-516 Coimbra (Portugal); Gonçalves, Bruno [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal)

    2015-10-15

    Highlights: • ATCA based control and data acquisition subsystem has been developed at IPFN. • PTP and time stamping were implemented with VHDL and PTP daemon (PTPd) codes. • The RTM (…) provides PTP synchronization with an external GMC. • The main advantage is that timestamps are generated closer to the Physical Layer at the GMII. • IPFN's upgrade consistently exhibited jitter values below 25 ns RMS. - Abstract: An in-house, Advanced Telecom Computing Architecture (ATCA) based control and data acquisition (C&DAQ) subsystem has been developed at Instituto de Plasmas e Fusão Nuclear (IPFN), aiming for compliance with the ITER Fast Plant System Controller (FPSC). Timing and synchronization for the ATCA modules connects to ITER Control, Data Access and Communication (CODAC) through the Timing Communication Network (TCN), which uses IEEE 1588-2008 Precision Time Protocol (PTP) to synchronize devices to a Grand Master Clock (GMC). The TCN infrastructure was tested for an RMS jitter under the limit of 50 ns. Therefore, IPFN's hardware, namely the ATCA-PTSW-AMC4 hub-module, which is in charge of timing and synchronization distribution for all subsystem endpoints, shall also perform within this jitter limit. This paper describes a relevant upgrade, applied to the ATCA-PTSW-AMC4 hardware, to comply with these requirements – in particular, the integration of an add-on module “RMC-TMG-1588” on its Rear Transition Module (RTM). This add-on is based on a commercial FPGA-based module from Trenz Electronic, using the ZHAW “PTP VHDL code for timestamping unit and clock”, which features clock offset and drift correction and hardware-assisted time stamping. The main advantage is that timestamps are generated closer to the Physical Layer, at the Gigabit Ethernet Media Independent Interface (GMII), avoiding the timing uncertainties accumulated through the upper layers. PTP code and user software run in a MicroBlaze™ soft-core CPU with Linux in the
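
    The offset and delay that a PTP daemon derives from the exchanged timestamps follow the standard IEEE 1588 relations; a minimal sketch of that arithmetic (not the IPFN/ZHAW implementation, which runs in VHDL and on the MicroBlaze soft-core):

    ```python
    def ptp_offset_delay(t1, t2, t3, t4):
        """Standard IEEE 1588 two-step exchange, assuming a symmetric path:
        t1: master sends Sync         (master clock)
        t2: slave receives Sync       (slave clock)
        t3: slave sends Delay_Req     (slave clock)
        t4: master receives Delay_Req (master clock)
        Returns (slave clock offset from master, one-way path delay)."""
        offset = ((t2 - t1) - (t4 - t3)) / 2
        delay = ((t2 - t1) + (t4 - t3)) / 2
        return offset, delay
    ```

    Hardware-assisted timestamping at the GMII, as in the upgrade above, improves this arithmetic only indirectly: the formulas are unchanged, but t1..t4 carry less software-induced jitter.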

  14. Precision Time Protocol support hardware for ATCA control and data acquisition system

    International Nuclear Information System (INIS)

    Correia, Miguel; Sousa, Jorge; Carvalho, Bernardo B.; Santos, Bruno; Carvalho, Paulo F.; Rodrigues, António P.; Combo, Álvaro M.; Pereira, Rita C.; Correia, Carlos M.B.A.; Gonçalves, Bruno

    2015-01-01

    Highlights: • ATCA based control and data acquisition subsystem has been developed at IPFN. • PTP and time stamping were implemented with VHDL and PTP daemon (PTPd) codes. • The RTM (…) provides PTP synchronization with an external GMC. • The main advantage is that timestamps are generated closer to the Physical Layer at the GMII. • IPFN's upgrade consistently exhibited jitter values below 25 ns RMS. - Abstract: An in-house, Advanced Telecom Computing Architecture (ATCA) based control and data acquisition (C&DAQ) subsystem has been developed at Instituto de Plasmas e Fusão Nuclear (IPFN), aiming for compliance with the ITER Fast Plant System Controller (FPSC). Timing and synchronization for the ATCA modules connects to ITER Control, Data Access and Communication (CODAC) through the Timing Communication Network (TCN), which uses IEEE 1588-2008 Precision Time Protocol (PTP) to synchronize devices to a Grand Master Clock (GMC). The TCN infrastructure was tested for an RMS jitter under the limit of 50 ns. Therefore, IPFN's hardware, namely the ATCA-PTSW-AMC4 hub-module, which is in charge of timing and synchronization distribution for all subsystem endpoints, shall also perform within this jitter limit. This paper describes a relevant upgrade, applied to the ATCA-PTSW-AMC4 hardware, to comply with these requirements – in particular, the integration of an add-on module “RMC-TMG-1588” on its Rear Transition Module (RTM). This add-on is based on a commercial FPGA-based module from Trenz Electronic, using the ZHAW “PTP VHDL code for timestamping unit and clock”, which features clock offset and drift correction and hardware-assisted time stamping. The main advantage is that timestamps are generated closer to the Physical Layer, at the Gigabit Ethernet Media Independent Interface (GMII), avoiding the timing uncertainties accumulated through the upper layers. PTP code and user software run in a MicroBlaze™ soft-core CPU with Linux in the same FPGA

  15. The Front-End Concentrator card for the RD51 Scalable Readout System

    International Nuclear Information System (INIS)

    Toledo, J; Esteve, R; Monzó, J M; Tarazona, A; Muller, H; Martoiu, S

    2011-01-01

    Conventional readout systems exist in many variants, since the usual approach is to build readout electronics for one given type of detector. The Scalable Readout System (SRS) developed within the RD51 collaboration relaxes this situation considerably by providing a choice of frontends which are connected over a customizable interface to a common SRS DAQ architecture. This allows sharing development and production costs among a large base of users, as well as support from a wide base of developers. The Front-end Concentrator card (FEC), an RD51 common project between CERN and the NEXT Collaboration, is a reconfigurable interface between the SRS online system and a wide range of frontends. This is accomplished by using application-specific adapter cards between the FEC and the frontends. The ensemble (FEC and adapter card, edge-mounted) forms a 6U × 220 mm Eurocard combo that fits in a 19'' subchassis. Adapter cards already exist for the first applications and more are in development.

  16. Evolution of the ReadOut System of the ATLAS experiment

    CERN Document Server

    Borga, A; The ATLAS collaboration; Green, B; Kugel, A; Joos, M; Panduro Vazquez, W; Schumacher, J; Teixeira-Dias, P; Tremblet, L; Vandelli, W; Vermeulen, J; Werner, P; Wickens, F

    2014-01-01

    The ReadOut System (ROS) is a central and essential part of the ATLAS DAQ system. It receives and buffers data of events accepted by the first-level trigger from all subdetectors and first-level trigger subsystems. Event data are subsequently forwarded to the High-Level Trigger system and Event Builder via a 1 GbE-based network. The ATLAS ROS is being completely renewed in view of the demanding conditions expected during LHC Run 2 and Run 3, to replace obsolete technologies; space constraints require it to be compact. The new ROS will consist of roughly 100 Linux-based 2U rack-mounted server PCs, each equipped with two PCIe I/O cards and four 10 GbE interfaces. The FPGA-based PCIe I/O cards, developed by the ALICE collaboration, will be configured with ATLAS-specific firmware, the so-called RobinNP firmware. They will provide the connectivity to about 2000 optical point-to-point links conveying the ATLAS event data. This dense configuration provides an excellent test bench for studying I/O efficiency and ...

  17. Reactor control system upgrade for the McClellan Nuclear Radiation Center Sacramento, CA

    International Nuclear Information System (INIS)

    Power, M. A.

    1999-01-01

    information to either keep the reactor operating or to shut the reactor down. In addition to new developments in the signal processing realm, the new control system will be migrating from a PC-based computer platform to a Sun Solaris-based computer platform. The proven history of stability and performance of the Sun Solaris operating system is the main advantage of this change. The I/O system will also be migrating from a PC-based data collection system, which communicates plant data to the control computer using RS-232 connections, to an Ethernet-based I/O system. The Ethernet Data Acquisition System (EDAS) modules from Intelligent Instrumentation, Inc. provide an excellent solution for embedded control of a system using the more universally accepted data transmission standard of TCP/IP. The modules contain a PROM, which implements all of the functionality of the I/O module, including the TCP/IP network access. Thus the module does not rely on an internal, sophisticated operating system for its functionality but rather on a small set of hard-coded instructions, which almost eliminates the possibility of the module failing due to software problems. An internal EEPROM can be modified over the Internet to change module configurations. Once configured, the module is contacted just like any other Internet host using TCP/IP socket calls. The main advantages of this architecture are its flexibility, expandability, and high throughput.
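
    "Contacted just like any other Internet host" means the client side is a plain TCP socket. The sketch below shows that pattern with a loopback thread standing in for the I/O module; the 16-bit big-endian framing is invented for illustration and is not the real EDAS wire protocol:

    ```python
    import socket
    import struct
    import threading

    def read_samples(sock, n):
        """Read n 16-bit big-endian samples from a connected TCP socket.
        Framing is illustrative only, not the actual EDAS protocol."""
        need = 2 * n
        buf = b""
        while len(buf) < need:
            chunk = sock.recv(need - len(buf))
            if not chunk:
                raise ConnectionError("peer closed before all samples arrived")
            buf += chunk
        return list(struct.unpack(f">{n}h", buf))

    def demo():
        # Loopback stand-in for the I/O module: serves four samples and exits.
        srv = socket.socket()
        srv.bind(("127.0.0.1", 0))
        srv.listen(1)

        def serve():
            conn, _ = srv.accept()
            conn.sendall(struct.pack(">4h", 10, -20, 30, -40))
            conn.close()

        threading.Thread(target=serve, daemon=True).start()
        with socket.create_connection(srv.getsockname()) as cli:
            samples = read_samples(cli, 4)
        srv.close()
        return samples
    ```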

  18. Development of a beam test telescope based on the Alibava readout system

    International Nuclear Information System (INIS)

    Marco-Hernandez, R

    2011-01-01

    A telescope for beam tests has been developed as a result of a collaboration among the University of Liverpool, Centro Nacional de Microelectronica (CNM) of Barcelona and Instituto de Fisica Corpuscular (IFIC) of Valencia. This system is intended to carry out both analogue charge collection and spatial resolution measurements with different types of microstrip or pixel silicon detectors in a beam test environment. The telescope has four combined XY-measurement and trigger planes (XYT boards) and it can accommodate up to twelve devices under test (DUT boards). The DUT board uses two Beetle ASICs for the readout of chilled silicon detectors. The board can operate in a self-triggering mode. It features a temperature sensor and can be mounted on a rotary stage. A Peltier element is used for cooling the DUT. Each XYT board measures the track space points using two silicon strip detectors connected to two Beetle ASICs. It can also trigger on the particle tracks in the beam test. The board includes a CPLD which synchronizes the trigger signal to a common clock, delays it and implements coincidence with other XYT boards. An Alibava mother board is used to read out and control each XYT/DUT board from a common trigger signal and a common clock signal. The Alibava board has an on-board TDC to time-stamp each trigger. The data collected by each Alibava board are sent to a master card by means of a local data/address bus following a custom digital protocol. The master board distributes the trigger, clock and reset signals. It also merges the data streams from up to sixteen Alibava boards. The board also has a test channel for testing an XYT or DUT board in a standard mode. This board is implemented with a Xilinx development board and a custom patch board. The master board is connected to the DAQ software via 100 Mb/s Ethernet. Track-based alignment software has also been developed for the data obtained with the DAQ software.

  19. Development of a beam test telescope based on the Alibava readout system

    Science.gov (United States)

    Marco-Hernández, R.

    2011-01-01

    A telescope for beam tests has been developed as a result of a collaboration among the University of Liverpool, Centro Nacional de Microelectrónica (CNM) of Barcelona and Instituto de Física Corpuscular (IFIC) of Valencia. This system is intended to carry out both analogue charge collection and spatial resolution measurements with different types of microstrip or pixel silicon detectors in a beam test environment. The telescope has four combined XY-measurement and trigger planes (XYT boards) and it can accommodate up to twelve devices under test (DUT boards). The DUT board uses two Beetle ASICs for the readout of chilled silicon detectors. The board can operate in a self-triggering mode. It features a temperature sensor and can be mounted on a rotary stage. A Peltier element is used for cooling the DUT. Each XYT board measures the track space points using two silicon strip detectors connected to two Beetle ASICs. It can also trigger on the particle tracks in the beam test. The board includes a CPLD which synchronizes the trigger signal to a common clock, delays it and implements coincidence with other XYT boards. An Alibava mother board is used to read out and control each XYT/DUT board from a common trigger signal and a common clock signal. The Alibava board has an on-board TDC to time-stamp each trigger. The data collected by each Alibava board are sent to a master card by means of a local data/address bus following a custom digital protocol. The master board distributes the trigger, clock and reset signals. It also merges the data streams from up to sixteen Alibava boards. The board also has a test channel for testing an XYT or DUT board in a standard mode. This board is implemented with a Xilinx development board and a custom patch board. The master board is connected to the DAQ software via 100 Mb/s Ethernet. Track-based alignment software has also been developed for the data obtained with the DAQ software.

  20. Development of a beam test telescope based on the Alibava readout system

    Energy Technology Data Exchange (ETDEWEB)

    Marco-Hernandez, R, E-mail: rmarco@ific.uv.es [Instituto de Física Corpuscular (CSIC-UV), Edificio Institutos de Investigación, Polígono de La Coma, s/n. E-46980 Paterna (Valencia) (Spain)

    2011-01-15

    A telescope for beam tests has been developed as a result of a collaboration among the University of Liverpool, Centro Nacional de Microelectronica (CNM) of Barcelona and Instituto de Fisica Corpuscular (IFIC) of Valencia. This system is intended to carry out both analogue charge collection and spatial resolution measurements with different types of microstrip or pixel silicon detectors in a beam test environment. The telescope has four combined XY-measurement and trigger planes (XYT boards) and it can accommodate up to twelve devices under test (DUT boards). The DUT board uses two Beetle ASICs for the readout of chilled silicon detectors. The board can operate in a self-triggering mode. It features a temperature sensor and can be mounted on a rotary stage. A Peltier element is used for cooling the DUT. Each XYT board measures the track space points using two silicon strip detectors connected to two Beetle ASICs. It can also trigger on the particle tracks in the beam test. The board includes a CPLD which synchronizes the trigger signal to a common clock, delays it and implements coincidence with other XYT boards. An Alibava mother board is used to read out and control each XYT/DUT board from a common trigger signal and a common clock signal. The Alibava board has an on-board TDC to time-stamp each trigger. The data collected by each Alibava board are sent to a master card by means of a local data/address bus following a custom digital protocol. The master board distributes the trigger, clock and reset signals. It also merges the data streams from up to sixteen Alibava boards. The board also has a test channel for testing an XYT or DUT board in a standard mode. This board is implemented with a Xilinx development board and a custom patch board. The master board is connected to the DAQ software via 100 Mb/s Ethernet. Track-based alignment software has also been developed for the data obtained with the DAQ software.

  1. Operational performance of the ATLAS trigger and data acquisition system and its possible evolution

    CERN Document Server

    Negri, A; The ATLAS collaboration

    2012-01-01

    The experience accumulated in operating the ATLAS DAQ/HLT system during these years has stimulated interest in exploring possible evolutions, despite the success of the current design. One attractive direction is to merge three systems - the second trigger level (L2), the Event Builder (EB), and the Event Filter (EF) - into a single homogeneous one in which each HLT node executes all the steps required by the trigger and data acquisition process. Each L1 event is assigned to an available HLT node, which executes the L2 algorithms using a subset of the event data and, upon positive selection, builds the event, which is further processed by the EF algorithms. Appealing aspects of this design are: a simplification of the software architecture and of its configuration, a better exploitation of the computing resources, the caching of fragments already collected for L2 processing, the automated load balancing between the L2 and EF selection steps, and the sharing of code and services on HLT nodes. Furthermore, the full treatmen...

  2. The IceCube data acquisition system: Signal capture, digitization,and timestamping

    Energy Technology Data Exchange (ETDEWEB)

    The IceCube Collaboration; Matis, Howard

    2009-03-02

    IceCube is a km-scale neutrino observatory under construction at the South Pole with sensors both in the deep ice (InIce) and on the surface (IceTop). The sensors, called Digital Optical Modules (DOMs), detect, digitize and timestamp the signals from optical Cherenkov-radiation photons. The DOM Main Board (MB) data acquisition subsystem is connected to the central DAQ in the IceCube Laboratory (ICL) by a single twisted copper wire-pair and transmits packetized data on demand. Time calibration is maintained throughout the array by regular transmission to the DOMs of precisely timed analog signals, synchronized to a central GPS-disciplined clock. The design goals and consequent features, functional capabilities, and initial performance of the DOM MB, and the operation of a combined array of DOMs as a system, are described here. Experience with the first InIce strings and the IceTop stations indicates that the system design and performance goals have been achieved.
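
    Calibration against precisely timed reference signals amounts to relating each DOM's local clock to the master clock through an offset and a drift. A hedged sketch of the least-squares fit such a scheme implies (the actual IceCube calibration procedure is more elaborate; the linear model here is an illustrative assumption):

    ```python
    def fit_clock(pairs):
        """Least-squares fit of t_master ≈ drift * t_local + offset,
        given (t_local, t_master) calibration pairs. Illustrative only."""
        n = len(pairs)
        sx = sum(x for x, _ in pairs)
        sy = sum(y for _, y in pairs)
        sxx = sum(x * x for x, _ in pairs)
        sxy = sum(x * y for x, y in pairs)
        drift = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        offset = (sy - drift * sx) / n
        return drift, offset

    def local_to_master(t_local, drift, offset):
        """Translate a locally timestamped hit into master-clock time."""
        return drift * t_local + offset
    ```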

  3. Development and test of the readout system for the CBM-MVD prototype

    Energy Technology Data Exchange (ETDEWEB)

    Milanovic, Borislav; Neuman, Bertram; Wiebusch, Michael; Amar-Youcef, Samir; Froehlich, Ingo; Stroth, Joachim [Institut fuer Kernphysik, Goethe-Universitaet Frankfurt, Frankfurt am Main (Germany); Collaboration: CRESST-Collaboration; CBM-MVD Collaboration

    2013-07-01

    The CBM experiment at FAIR aims at a better understanding of the QCD phase diagram and the in-medium properties of matter at high densities. In order to enhance the detection of rare probes via their secondary decay vertices and to support the primary tracking system, the CBM Micro Vertex Detector (MVD) is foreseen. Recently, the MVD prototype has been developed at the IKF in Frankfurt. The module contains one quarter of the first MVD station, featuring four prototype sensors MIMOSA-26 AHR thinned down to 50 μm. The prototype was tested at the CERN SPS accelerator with high-energy pions in November 2012. This contribution discusses the stability and scalability of the DAQ, slow-control and monitoring routines during the beamtime, as well as the sensor behavior under high loads of up to 700 000 particles per second. The readout system partially uses hardware from the HADES detector, which will also run at FAIR. Readout rates of 98 MB/s, at the limit of Gigabit Ethernet, have been achieved, showing no sign of data loss or corruption.

  4. Measurement of rare probes with the silicon tracking system of the CBM experiment at FAIR

    International Nuclear Information System (INIS)

    Heuser, Johann; Friese, Volker

    2014-01-01

    The Compressed Baryonic Matter (CBM) experiment at FAIR will explore the phase diagram of strongly interacting matter at the highest net baryon densities and moderate temperatures. The CBM physics program will be started with beams delivered by the SIS 100 synchrotron, providing energies from 2 to 14 GeV/nucleon for heavy nuclei, up to 14 GeV/nucleon for light nuclei, and 29 GeV for protons. The highest net baryon densities will be explored with ion beams of up to 45 GeV/nucleon delivered by SIS 300 in the next stage of FAIR. Collision rates of up to 10^7 per second are required to produce very rare probes with unprecedented statistics in this energy range. Their signatures are complex. These conditions call for detector systems designed to meet extreme requirements in terms of rate capability, momentum and spatial resolution, and for a novel DAQ and trigger concept which is limited not by latency but by throughput. In this paper we outline the concepts of CBM's central detector, the Silicon Tracking System, and of the First-Level Event Selector, a dedicated computing farm that reduces the raw data volume on-line by up to three orders of magnitude to a recordable rate. Progress with the development of detectors and software algorithms is discussed, and examples of performance studies on the reconstruction of rare probes at SIS 100 and SIS 300 energies are given.

  5. The Detector Control System of the ATLAS experiment at CERN An application to the calibration of the modules of the Tile Hadron Calorimeter

    CERN Document Server

    Varelá-Rodriguez, F

    2002-01-01

    The principal subject of this thesis work is the design and development of the Detector Control System (DCS) of the ATLAS experiment at CERN. The DCS must ensure the coherent and safe operation of the detector and handle the communication with external systems, like the LHC accelerator and CERN services. A bidirectional data flow between the Data AcQuisition (DAQ) system and the DCS will enable coherent operation of the experiment. The LHC experiments represent new challenges for the design of the control system. The extremely high complexity of the project forces the design of different components of the detector and related systems to be performed well ahead of their use. The long lifetime of the LHC experiments imposes the use of evolving technologies and modular design. The overall dimensions of the detector and the high number of I/O channels call for a control system with processing power distributed all over the facilities of the experiment while keeping a low cost. The environmental conditions require...

  6. The rapid secondary electron imaging system of the proton beam writer at CIBA

    International Nuclear Information System (INIS)

    Udalagama, C.N.B.; Bettiol, A.A.; Kan, J.A. van; Teo, E.J.; Watt, F.

    2007-01-01

    Recent years have witnessed a proliferation of research involving proton beam (p-beam) writing. This has prompted investigations into means of optimizing the process of p-beam writing so as to make it less time-consuming and more efficient. One such avenue is the improvement of the pre-writing preparatory procedures, beam focusing and sample alignment, which are centred on acquiring images of a resolution standard or sample. The mode of imaging used up to now has relied on conventional nuclear microprobe signals, which are of a pulsed nature and inherently slow. In this work, we report on the new imaging system that has been introduced, which uses proton-induced secondary electrons. This, in conjunction with software developed in-house that uses a National Instruments DAQ card with hardware triggering, facilitates large data transfer rates, enabling rapid imaging. Frame rates of up to 10 frames/s have been achieved at an imaging resolution of 512 × 512 pixels.

  7. A scalable gigabit data acquisition system for calorimeters for linear collider

    CERN Document Server

    Gastaldi, F; Magniette, F; Boudry, V

    2015-01-01

    ... prototypes of ultra-granular calorimeters for the International Linear Collider (ILC). Our design is generic enough to cope with other applications with minor adaptations. The DAQ is made up of four different modules, including an optional concentrator. A Detector InterFace (DIF) is placed at one end of the detector elements (SLAB), holding up to 160 ASICs. It is connected by a single HDMI cable, which is used to transmit both slow-control and readout data, 8b/10b-encoded, over a 50 Mb/s serial link to the Gigabit Concentrator Card (GDCC). One GDCC controls up to 7 DIFs, distributes the system clock and the ASIC configuration, and collects data from them. Each DIF's data packet is encapsulated in Ethernet format and sent out via an optical or copper link. The Data Concentrator Card (DCC) is a multiplexer (1 to 8) that can optionally be inserted between the GDCC and the DIFs, increasing the number of managed ...

  8. A real-time data transmission method based on Linux for physical experimental readout systems

    International Nuclear Information System (INIS)

    Cao Ping; Song Kezhu; Yang Junfeng

    2012-01-01

    In a typical physical experimental instrument, such as a fusion or particle physics application, the readout system generally implements an interface between the data acquisition (DAQ) system and the front-end electronics (FEE). The key task of a readout system is to read, pack, and forward the data from the FEE to the back-end data concentration center in real time. To guarantee real-time performance, the VxWorks operating system (OS) is widely used in readout systems. However, VxWorks is not an open-source OS, which gives it many disadvantages. With the development of multi-core processors and new scheduling algorithms, Linux exhibits real-time performance similar to that of VxWorks. It has been used successfully even for some hard real-time systems. Discussions and evaluations of real-time Linux solutions as a possible replacement for VxWorks therefore arise naturally. In this paper, a real-time transmission method based on Linux is introduced. To reduce the number of transfer cycles for large amounts of data, a large block of contiguous memory is allocated as a DMA transfer buffer by slightly modifying the Linux kernel (version 2.6) source code. To increase the network transmission throughput, the user software is structured to run in parallel. To achieve high performance in real-time data transfer from hardware to software, mapping techniques must be used to avoid unnecessary data copying. A simplified readout system was implemented with four readout modules in a PXI crate. This system can support up to 48 MB/s data throughput from the front-end hardware to the back-end concentration center through a Gigabit Ethernet connection. There are no restrictions on the use of this method, in hardware or software, which means that it can easily be migrated to other interrupt-related applications.
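
    The copy-avoiding mapping mentioned above can be illustrated from user space: a memory-mapped buffer read through a memoryview exposes the data without intermediate copies. The temp file below is a stand-in for the kernel-allocated contiguous DMA buffer; this is not the authors' driver code:

    ```python
    import mmap
    import tempfile

    def demo():
        """Map a (stand-in) DMA buffer into user space and read it back
        through a zero-copy memoryview window."""
        with tempfile.NamedTemporaryFile() as f:
            f.truncate(4096)                      # size of the "DMA buffer"
            mm = mmap.mmap(f.fileno(), 4096)      # map it into user space
            view = memoryview(mm)                 # zero-copy window
            mm[0:4] = b"\x01\x02\x03\x04"         # stand-in for a DMA write
            packet = view[0:4].tobytes()          # materialize only the slice
            view.release()                        # release before closing
            mm.close()
        return packet
    ```

    In the real system the mapping is done once per acquisition run, so the per-event cost is just pointer arithmetic over the shared buffer rather than a kernel-to-user copy.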

  9. The Trigger and Data Acquisition System for the 8 tower subsystem of the KM3NeT detector

    Energy Technology Data Exchange (ETDEWEB)

    Manzali, M., E-mail: matteo.manzali@cnaf.infn.it [INFN CNAF, Bologna (Italy); Università degli Studi di Ferrara, Ferrara (Italy); Chiarusi, T. [INFN BO, Bologna (Italy); Favaro, M. [INFN BO, Bologna (Italy); INFN CNAF, Bologna (Italy); Università degli Studi di Ferrara, Ferrara (Italy); Giacomini, F. [INFN CNAF, Bologna (Italy); Margiotta, A.; Pellegrino, C. [INFN BO, Bologna (Italy); Dipartimento di Fisica e Astronomia, Università degli Studi di Bologna, Bologna (Italy)

    2016-07-11

    KM3NeT is a deep-sea research infrastructure being constructed in the Mediterranean Sea. It will host a large Cherenkov neutrino telescope that will collect photons emitted along the path of the charged particles produced in neutrino interactions in the vicinity of the detector. The philosophy of the DAQ system of the detector foresees that all data are sent to shore after a proper sampling of the photomultiplier signals. No off-shore hardware trigger is implemented, and a software selection of the data is performed with an on-line Trigger and Data Acquisition System (TriDAS) to reduce the large throughput due to the environmental light background. A first version of the TriDAS was developed to operate a prototype detection unit deployed in March 2013 in the abyssal site of Capo Passero (Sicily, Italy), about 3500 m deep. A revised and improved version has been developed to meet the requirements of the final detector, using new tools and modern design solutions. First installation and scalability tests have been performed at the Bologna Common Infrastructure, and results comparable to expectations have been obtained.

  10. GPUs for real-time processing in HEP trigger systems (CHEP2013: 20. international conference on computing in high energy and nuclear physics)

    Energy Technology Data Exchange (ETDEWEB)

    Lamanna, G; Piandani, R [INFN, Pisa (Italy)]; Ammendola, R [INFN, Rome "Tor Vergata" (Italy)]; Bauce, M; Giagu, S; Messina, A [University, Rome "Sapienza" (Italy)]; Biagioni, A; Lonardo, A; Paolucci, P S; Rescigno, M; Simula, F; Vicini, P [INFN, Rome "Sapienza" (Italy)]; Fantechi, R [CERN, Geneve (Switzerland)]; Fiorini, M [University and INFN, Ferrara (Italy)]; Graverini, E; Pantaleo, F; Sozzi, M [University, Pisa (Italy)]

    2014-06-11

    We describe a pilot project for the use of Graphics Processing Units (GPUs) for online triggering applications in High Energy Physics (HEP) experiments. Two major trends can be identified in the development of trigger and DAQ systems for HEP experiments: the massive use of general-purpose commodity systems such as commercial multicore PC farms for data acquisition, and the reduction of trigger levels implemented in hardware, towards a pure software selection system (trigger-less). The very innovative approach presented here aims at exploiting the parallel computing power of commercial GPUs to perform fast computations in software both at low- and high-level trigger stages. General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughputs, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming very attractive. We discuss in detail the use of online parallel computing on GPUs for synchronous low-level triggers with fixed latency. In particular we show preliminary results on a first test in the NA62 experiment at CERN. The use of GPUs in high-level triggers is also considered; the ATLAS experiment at CERN (and in particular its muon trigger) is taken as a case study of possible applications.
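The trigger pattern the GPU approach parallelizes is one independent decision per event, evaluated over a large batch. A minimal sketch of that pattern follows; the energy-sum condition and threshold are invented stand-ins, and `map()` stands in for a GPU kernel launch over the batch.

```python
# Illustrative data-parallel trigger primitive: one decision per event,
# evaluated independently of all others -- the pattern a GPU kernel
# parallelizes. The event contents and threshold are invented.
def trigger_decision(event, threshold=50.0):
    return sum(event) > threshold  # e.g. a total-energy cut

def run_batch(events, threshold=50.0):
    # On a GPU each event would map to one thread; here map() stands in
    # for the kernel launch over the whole batch.
    return [trigger_decision(e, threshold) for e in events]

batch = [[10.0, 5.0], [30.0, 40.0], [60.0, 1.0]]
print(run_batch(batch))  # [False, True, True]
```

For a fixed-latency low-level trigger the key constraint is that the whole batch must complete within a bounded time, which is why the record emphasizes GPU latency rather than raw throughput.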

  11. GPUs for real-time processing in HEP trigger systems (ACAT2013: 15. international workshop on advanced computing and analysis techniques in physics research)

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R; Biagioni, A; Frezza, O; Cicero, F Lo; Lonardo, A; Messina, A; Paolucci, PS; Rossetti, D; Simula, F; Tosoratto, L; Vicini, P [INFN Roma, P.le A. Moro 2, 00185 Roma (Italy)]; Deri, L; Sozzi, M; Pantaleo, F [Pisa University, Largo B. Pontecorvo 3, 56127 Pisa (Italy)]; Fiorini, M [Ferrara University, Via Saragat 1, 44122 Ferrara (Italy)]; Lamanna, G [INFN Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy)]; Collaboration: GAP Collaboration

    2014-06-06

    We describe a pilot project (GAP - GPU Application Project) for the use of GPUs (Graphics Processing Units) for online triggering applications in High Energy Physics experiments. Two major trends can be identified in the development of trigger and DAQ systems for particle physics experiments: the massive use of general-purpose commodity systems such as commercial multicore PC farms for data acquisition, and the reduction of trigger levels implemented in hardware, towards a fully software data selection system ("trigger-less"). The innovative approach presented here aims at exploiting the parallel computing power of commercial GPUs to perform fast computations in software, not only at high trigger levels but also in early trigger stages. General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughputs, the use of such devices for real-time applications in high energy physics data acquisition and trigger systems is becoming relevant. We discuss in detail the use of online parallel computing on GPUs for synchronous low-level triggers with fixed latency. In particular we show preliminary results on a first test in the CERN NA62 experiment. The use of GPUs in high level triggers is also considered, the CERN ATLAS experiment being taken as a case study of possible applications.

  12. GPUs for real-time processing in HEP trigger systems (ACAT2013: 15. international workshop on advanced computing and analysis techniques in physics research)

    International Nuclear Information System (INIS)

    Ammendola, R; Biagioni, A; Frezza, O; Cicero, F Lo; Lonardo, A; Messina, A; Paolucci, PS; Rossetti, D; Simula, F; Tosoratto, L; Vicini, P; Deri, L; Sozzi, M; Pantaleo, F; Fiorini, M; Lamanna, G

    2014-01-01

    We describe a pilot project (GAP - GPU Application Project) for the use of GPUs (Graphics Processing Units) for online triggering applications in High Energy Physics experiments. Two major trends can be identified in the development of trigger and DAQ systems for particle physics experiments: the massive use of general-purpose commodity systems such as commercial multicore PC farms for data acquisition, and the reduction of trigger levels implemented in hardware, towards a fully software data selection system ("trigger-less"). The innovative approach presented here aims at exploiting the parallel computing power of commercial GPUs to perform fast computations in software, not only at high trigger levels but also in early trigger stages. General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughputs, the use of such devices for real-time applications in high energy physics data acquisition and trigger systems is becoming relevant. We discuss in detail the use of online parallel computing on GPUs for synchronous low-level triggers with fixed latency. In particular we show preliminary results on a first test in the CERN NA62 experiment. The use of GPUs in high level triggers is also considered, the CERN ATLAS experiment being taken as a case study of possible applications.

  13. GPUs for real-time processing in HEP trigger systems (CHEP2013: 20. international conference on computing in high energy and nuclear physics)

    International Nuclear Information System (INIS)

    Lamanna, G; Piandani, R (INFN, Pisa (Italy)); Ammendola, R (INFN, Rome "Tor Vergata" (Italy)); Bauce, M; Giagu, S; Messina, A (University, Rome "Sapienza" (Italy)); Biagioni, A; Lonardo, A; Paolucci, P S; Rescigno, M; Simula, F; Vicini, P (INFN, Rome "Sapienza" (Italy)); Fantechi, R; Fiorini, M; Graverini, E; Pantaleo, F; Sozzi, M

    2014-01-01

    We describe a pilot project for the use of Graphics Processing Units (GPUs) for online triggering applications in High Energy Physics (HEP) experiments. Two major trends can be identified in the development of trigger and DAQ systems for HEP experiments: the massive use of general-purpose commodity systems such as commercial multicore PC farms for data acquisition, and the reduction of trigger levels implemented in hardware, towards a pure software selection system (trigger-less). The very innovative approach presented here aims at exploiting the parallel computing power of commercial GPUs to perform fast computations in software both at low- and high-level trigger stages. General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughputs, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming very attractive. We discuss in detail the use of online parallel computing on GPUs for synchronous low-level triggers with fixed latency. In particular we show preliminary results on a first test in the NA62 experiment at CERN. The use of GPUs in high-level triggers is also considered; the ATLAS experiment at CERN (and in particular its muon trigger) is taken as a case study of possible applications.

  14. Towards a Cyber Defense Framework for SCADA Systems Based on Power Consumption Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez Jimenez, Jarilyn M [ORNL]; Chen, Qian [Savannah State University]; Nichols, Jeff A. [ORNL, Cyber Sciences]; Calhoun, Chelsea [Savannah State University]; Sykes, Summer [Savannah State University]

    2017-01-01

    Supervisory control and data acquisition (SCADA) systems are industrial automation systems that remotely monitor and control critical infrastructures. SCADA systems are major targets for espionage and sabotage attackers. According to the 2015 Dell security annual threat report, the number of cyber-attacks against SCADA systems doubled in the past year. Cyber-attacks (e.g., buffer overflows, rootkits and code injection) could cause serious financial losses and physical infrastructure damage. Moreover, some specific cyber-attacks against SCADA systems could become a threat to human life. Current commercial off-the-shelf security solutions are insufficient to protect SCADA systems against sophisticated cyber-attacks. In 2014 a report by Mandiant stated that 69% of organizations learned about their breaches from external entities, meaning that these companies lack their own detection systems. Furthermore, these breaches are not detected in real time or fast enough to prevent further damage. The average time between compromise and detection (for those intrusions that were detected) was 205 days. To address this challenge, we propose an Intrusion Detection System (IDS) that detects SCADA-specific cyber-attacks by analyzing the power consumption of a SCADA device. Specifically, to validate the proposed approach, we chose to monitor in real time the power usage of a Programmable Logic Controller (PLC). To this end, we configured the hardware of the testbed by installing the sensors required to monitor and collect its power consumption. After that, two SCADA-specific cyber-attacks were simulated and TracerDAQ Pro was used to collect the power consumption of the PLC under normal and anomalous scenarios. Results showed that it is possible to distinguish between the regular power usage of the PLC and its power usage while under specific cyber-attacks.
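The detection principle in this record can be sketched as a statistical baseline over power samples: learn the normal consumption, then flag deviations. The readings and the 3-sigma rule below are illustrative assumptions, not the paper's actual classifier.

```python
# Hedged sketch of power-consumption anomaly detection: flag PLC power
# samples that deviate from a learned baseline. Data and thresholds are
# invented for illustration.
from statistics import mean, stdev

def train_baseline(samples):
    """Learn mean and standard deviation from attack-free power samples."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, k=3.0):
    """Flag a sample more than k standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > k * sigma

normal_power = [5.0, 5.1, 4.9, 5.05, 4.95, 5.02, 4.98]  # watts, nominal cycle
baseline = train_baseline(normal_power)

print(is_anomalous(5.03, baseline))  # False: within the normal band
print(is_anomalous(7.5, baseline))   # True: e.g. injected code raising load
```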

  15. CHICSi--a compact ultra-high vacuum compatible detector system for nuclear reaction experiments at storage rings. III. readout system

    Energy Technology Data Exchange (ETDEWEB)

    Carlen, L.; Foerre, G.; Golubev, P.; Jakobsson, B. E-mail: bo.jakobsson@kosufy.lu.se; Kolozhvari, A.; Marciniewski, P.; Siwek, A.; Veldhuizen, E.J. van; Westerberg, L.; Whitlow, H.J.; Oestby, J.M

    2004-01-11

    The Celsius Heavy Ion Collaboration Si detector system (CHICSi) is a high-granularity, modular detector telescope array for operation around the cluster-jet target/circulating beam intersection of the CELSIUS storage ring at The Svedberg Laboratory in Uppsala, Sweden. It is able to provide the identity and momentum vector of up to 100 charged particles and fragments from proton-nucleus and nucleus-nucleus collisions at intermediate energies, 50-1000A MeV. All detector telescopes, as well as the major part of the electronic readout system, are placed inside the target chamber in ultra-high vacuum (UHV, 10{sup -9}-10{sup -7} Pa). This requires Very Large Scale Integrated (VLSI) microchips for the spectroscopic signal processing and for the generation and transport of digital control signals. Eighteen telescopes, read out with chip-on-board technique by ceramic Mother Boards (MB), and the corresponding 18 microchips are mounted on a 450x45 mm{sup 2} Grand Mother Board (GMB), processed on FR4 glass-fibre material. Each of the 28 GMB units contains a daisy-chain organisation of the VLSI chips and associated protection circuits. Analogue-to-digital conversion of the spectroscopic signals is performed on a board outside the chamber, which is connected on one side to a power distribution board directly attached to a UHV mounting flange, and on the other side to the VME-based data acquisition system (CHICSiDAQ). This in turn is connected via a fibre-optic link to the general TSL acquisition system (SVEDAQ), so that data from auxiliary detector systems, read out in CAMAC mode, can be stored in coincidence with CHICSi data.

  16. CHICSi--a compact ultra-high vacuum compatible detector system for nuclear reaction experiments at storage rings. III. readout system

    International Nuclear Information System (INIS)

    Carlen, L.; Foerre, G.; Golubev, P.; Jakobsson, B.; Kolozhvari, A.; Marciniewski, P.; Siwek, A.; Veldhuizen, E.J. van; Westerberg, L.; Whitlow, H.J.; Oestby, J.M.

    2004-01-01

    The Celsius Heavy Ion Collaboration Si detector system (CHICSi) is a high-granularity, modular detector telescope array for operation around the cluster-jet target/circulating beam intersection of the CELSIUS storage ring at The Svedberg Laboratory in Uppsala, Sweden. It is able to provide the identity and momentum vector of up to 100 charged particles and fragments from proton-nucleus and nucleus-nucleus collisions at intermediate energies, 50-1000A MeV. All detector telescopes, as well as the major part of the electronic readout system, are placed inside the target chamber in ultra-high vacuum (UHV, 10^-9-10^-7 Pa). This requires Very Large Scale Integrated (VLSI) microchips for the spectroscopic signal processing and for the generation and transport of digital control signals. Eighteen telescopes, read out with chip-on-board technique by ceramic Mother Boards (MB), and the corresponding 18 microchips are mounted on a 450x45 mm^2 Grand Mother Board (GMB), processed on FR4 glass-fibre material. Each of the 28 GMB units contains a daisy-chain organisation of the VLSI chips and associated protection circuits. Analogue-to-digital conversion of the spectroscopic signals is performed on a board outside the chamber, which is connected on one side to a power distribution board directly attached to a UHV mounting flange, and on the other side to the VME-based data acquisition system (CHICSiDAQ). This in turn is connected via a fibre-optic link to the general TSL acquisition system (SVEDAQ), so that data from auxiliary detector systems, read out in CAMAC mode, can be stored in coincidence with CHICSi data.

  17. Development of the quality control system of the readout electronics for the large size telescope of the Cherenkov Telescope Array observatory

    Energy Technology Data Exchange (ETDEWEB)

    Konno, Y.; Kubo, H.; Masuda, S. [Department of Physics, Graduate School of Science, Kyoto University, Kyoto (Japan); Paoletti, R.; Poulios, S. [SFTA Department, Physics Section, University of Siena and INFN, Siena (Italy); Rugliancich, A., E-mail: andrea.rugliancich@pi.infn.it [SFTA Department, Physics Section, University of Siena and INFN, Siena (Italy); Saito, T. [Department of Physics, Graduate School of Science, Kyoto University, Kyoto (Japan)

    2016-07-11

    The Cherenkov Telescope Array (CTA) is the next-generation VHE γ-ray observatory, which will improve the currently available sensitivity by a factor of 10 in the range 100 GeV to 10 TeV. The array consists of different types of telescopes, called large size telescope (LST), medium size telescope (MST) and small size telescope (SST). An LST prototype is currently being built and will be installed at the Observatorio Roque de los Muchachos, on the island of La Palma, Canary Islands, Spain. The readout system for the LST prototype has been designed, and around 300 readout boards will be produced in the coming months. In this note we describe an automated quality control system able to measure basic performance parameters and quickly identify faulty boards. - Highlights: • The Dragon board is part of the DAQ of the LST Cherenkov telescope prototype. • We developed an automated quality control system for the Dragon board. • We check pedestal, linearity, pulse shape and crosstalk values. • The quality control test can be performed on the production line.
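A pedestal check of the kind such a QC system automates can be sketched as a mean/RMS test per channel. The acceptance limits below are invented examples, not the Dragon board's actual criteria.

```python
# Hedged sketch of one automated QC pass: compute pedestal mean and RMS
# for a channel and flag it if either falls outside tolerance. The limits
# are invented examples for illustration.
from statistics import mean, pstdev

def check_pedestal(samples, mean_range=(180.0, 220.0), max_rms=5.0):
    """Return True if the channel's pedestal is within acceptance."""
    mu, rms = mean(samples), pstdev(samples)
    return mean_range[0] <= mu <= mean_range[1] and rms <= max_rms

good_channel = [200, 201, 199, 200, 202, 198]      # ADC counts
noisy_channel = [200, 230, 170, 205, 240, 160]     # RMS far too large

print(check_pedestal(good_channel))   # True
print(check_pedestal(noisy_channel))  # False
```

Linearity, pulse-shape and crosstalk tests would follow the same pattern: inject a known stimulus, digitize, and compare the extracted figure of merit against an acceptance window, so faulty boards are flagged on the production line.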

  18. The data acquisition system of the Belle II Pixel Detector

    Science.gov (United States)

    Münchow, D.; Dingfelder, J.; Geßler, T.; Konorov, I.; Kühn, W.; Lange, S.; Lautenbach, K.; Levit, D.; Liu, Z.; Marinas, C.; Schnell, M.; Spruck, B.; Zhao, J.

    2014-08-01

    At the future Belle II experiment the DEPFET (DEPleted Field Effect Transistor) pixel detector will consist of about 8 million channels and will be the innermost detector. Because of its small distance to the interaction region and the high luminosity of Belle II, a data rate of about 22 GB/s is expected at a trigger rate of about 30 kHz with an estimated occupancy of about 3%. Due to this high data rate, a data reduction factor higher than 30 is needed in order to stay within the specifications of the event builder. The main hardware used to reduce the data rate is an xTCA-based Compute Node (CN) developed in cooperation between IHEP Beijing and the University of Giessen. Each node has as its main component a Xilinx Virtex-5 FX70T FPGA and is equipped with 2 × 2 GB RAM, GBit Ethernet and 4 × 6.25 Gb/s optical links. An ATCA carrier board is able to hold up to four CNs and supplies high-bandwidth connections between the four CNs and to the ATCA backplane. To achieve the required data reduction on the CNs, regions of interest (ROI) are used. These regions are calculated in two independent systems by projecting tracks back to the pixel detector. One is the High Level Trigger (HLT), which uses data from the Silicon Vertex Detector (SVD), a silicon strip detector, and the outer detectors. The other is the Data Concentrator (DATCON), which calculates ROIs based on SVD data only, in order to pick up low-momentum tracks. With this information, only PXD data inside these ROIs are forwarded to the event builder, while data outside of these regions are discarded. First results from the test beam in January 2014 at DESY with a Belle II vertex detector prototype and the full DAQ chain are presented.
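The ROI-based reduction described above amounts to a geometric filter: keep only pixel hits that fall inside rectangles extrapolated from tracks. The tuple formats below are assumptions for illustration, not the DATCON/HLT wire format.

```python
# Hedged sketch of ROI-based data reduction: keep only pixel hits inside
# regions of interest projected back from tracks. Hit and ROI formats
# are invented for the example.
def inside(hit, roi):
    """True if hit (x, y) lies inside roi (x0, y0, x1, y1)."""
    (x, y), (x0, y0, x1, y1) = hit, roi
    return x0 <= x <= x1 and y0 <= y <= y1

def reduce_event(hits, rois):
    """Forward only hits inside at least one ROI; discard the rest."""
    return [h for h in hits if any(inside(h, r) for r in rois)]

rois = [(10, 10, 20, 20), (40, 40, 50, 50)]
hits = [(12, 15), (25, 25), (45, 48), (5, 5)]
print(reduce_event(hits, rois))  # [(12, 15), (45, 48)]
```

On the Compute Node this selection runs in FPGA logic at full bandwidth; the sketch only shows the decision being made per hit.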

  19. Development of a hybrid MSGC detector for thermal neutron imaging with a MHz data acquisition and histogramming system

    CERN Document Server

    Gebauer, B; Richter, G; Levchanovsky, F V; Nikiforov, A

    2001-01-01

    For thermal neutron imaging at the next generation of high-flux pulsed neutron sources, a large-area, fourfold-segmented, hybrid, low-pressure, two-dimensional position-sensitive microstrip gas chamber detector, fabricated in a multilayer technology on glass substrates, is presently being developed; it utilizes a thin composite 157Gd/CsI neutron converter. The present article focuses on the readout scheme and the data acquisition (DAQ) system. For position encoding, interpolating and fast multihit delay-line based electronics is applied, with up to eightfold sub-segmentation per geometrical detector segment. All signals, i.e. position, time-of-flight and pulse-height signals, are fed into deadtime-less 8-channel multihit TDC chips with 120 ps LSB via constant fraction and time-over-threshold discriminators, respectively. The multihit capability is utilized to raise the count rate limit, in combination with a sum check algorithm for disentangling pulses from different events. The first vers...
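The sum-check idea for delay-line readout can be sketched as follows: a valid hit satisfies t_left + t_right ≈ T_delay (the full propagation time along the line), so left/right TDC times from different events can be disentangled by testing the sum. The total delay and tolerance below are invented values.

```python
# Hedged sketch of delay-line position encoding with a sum check.
# T_DELAY and TOL are invented example values, not the detector's.
T_DELAY = 100.0   # ns, total propagation time along the delay line
TOL = 1.0         # ns, acceptance window of the sum check

def pair_hits(lefts, rights):
    """Match left/right TDC times whose sum passes the check; return
    normalized positions in [0, 1] along the line."""
    pairs = []
    for tl in lefts:
        for tr in rights:
            if abs((tl + tr) - T_DELAY) <= TOL:
                # position from the time difference, scaled to [0, 1]
                pairs.append((tl - tr + T_DELAY) / (2 * T_DELAY))
    return pairs

# two genuine hits plus one uncorrelated background time on the left side
lefts = [30.0, 80.0, 55.0]
rights = [70.0, 20.0]
print(pair_hits(lefts, rights))  # [0.3, 0.8]
```

The uncorrelated time (55.0 ns) pairs with neither right-side time, so the sum check rejects it, which is exactly how multihit pileup from different events is disentangled.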

  20. The CBM Experiment at FAIR-New challenges for Front-End Electronics, Data Acquisition and Trigger Systems

    International Nuclear Information System (INIS)

    Mueller, Walter F J

    2006-01-01

    The 'Compressed Baryonic Matter' (CBM) experiment at the new 'Facility for Antiproton and Ion Research' (FAIR) in Darmstadt is designed to study the properties of highly compressed baryonic matter produced in nucleus-nucleus collisions in the 10 to 45 A GeV energy range. One of the key observables is hidden (J/ψ) and open (D^0, D^±) charm production. To achieve an adequate sensitivity, extremely high interaction rates of up to 10^7 events/second are required, resulting in major technological challenges for the detectors, front-end electronics and data processing. The front-end electronics will be self-triggered: it will autonomously detect particle hits and output the hit parameters together with a precise absolute time-stamp. Several layers of feature extraction and event selection will reduce the primary data flow of about 1 TByte/s to a level of 1 GByte/s. This new architecture avoids many limitations of conventional DAQ/trigger systems and is, for example, essential for open charm detection, which requires the reconstruction of displaced vertices in a high-rate heavy-ion environment.
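In a self-triggered architecture like the one described, events are no longer defined by a hardware trigger but reconstructed in software from the time-stamped hit stream. A toy event builder that clusters hits by timestamp gaps might look like this; the gap parameter and hit format are invented for the example.

```python
# Hedged sketch of software event building over a self-triggered hit
# stream of (timestamp_ns, channel, amplitude) tuples. The gap value
# is an invented example.
def build_events(hits, max_gap_ns=50):
    """Group time-sorted hits into events wherever the gap between
    consecutive timestamps exceeds max_gap_ns."""
    hits = sorted(hits)                  # sort by timestamp
    events, current = [], [hits[0]]
    for h in hits[1:]:
        if h[0] - current[-1][0] <= max_gap_ns:
            current.append(h)            # same event
        else:
            events.append(current)       # gap: close the event
            current = [h]
    events.append(current)
    return events

hits = [(0, "ch3", 12), (20, "ch7", 8), (500, "ch1", 30), (510, "ch2", 5)]
print(len(build_events(hits)))  # 2 events
```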

  1. ATLAS Detector Interface Group

    CERN Multimedia

    Mapelli, L

    Originally organised as a sub-system in the DAQ/EF-1 Prototype Project, the Detector Interface Group (DIG) was an information exchange channel between the detector systems and the Data Acquisition, providing critical detector information for prototype design and detector integration. After the reorganisation of the Trigger/DAQ Project and of Technical Coordination, the necessity to provide an adequate context for the integration of detectors with the Trigger and DAQ led to the organisation of the DIG as one of the activities of Technical Coordination. Such an organisation emphasises the ATLAS-wide coordination of the Trigger and DAQ exploitation aspects, which go beyond the domain of the Trigger/DAQ project itself. As part of Technical Coordination, the DIG provides the natural environment for the common work of Trigger/DAQ and detector experts: a DIG forum for a wide discussion of all the detector and Trigger/DAQ integration issues, and a more restricted DIG group for the practical organisation and implementation o...

  2. Spectrally efficient polymer optical fiber transmission

    Science.gov (United States)

    Randel, Sebastian; Bunge, Christian-Alexander

    2011-01-01

    The step-index polymer optical fiber (SI-POF) is an attractive transmission medium for high speed communication links in automotive infotainment networks, in industrial automation, and in home networks. Growing demands for quality of service, e.g. for IPTV distribution in homes and for Ethernet-based industrial control networks, will necessitate Gigabit speeds in the near future. We present an overview of recent advances in the design of spectrally efficient and robust Gigabit-over-SI-POF transmission systems.

  3. The ALICE silicon pixel detector system

    International Nuclear Information System (INIS)

    Kapusta, S.

    2009-01-01

    front-end to the on-detector electronics are made of aluminum. In this thesis, I present my involvement in the ALICE SPD project; I summarize the design, the construction, and the testing phase of the ALICE SPD. My involvement in the ALICE DCS project is also presented. During the past years the ALICE SPD collaboration has carried out four testbeams. The primary objective of these testbeams was the validation of the pixel ASICs, the sensors, the read-out electronics and the online systems - Data Acquisition System (DAQ), Trigger (TRG) and Detector Control System (DCS) - with their software, as well as offline. The pixel chip and sensor prototypes were studied under different conditions (threshold scan, different inclination angles with respect to the beam, bias voltage scan, etc.). Tests of thick and also thin single chip assemblies and chip ladders, as designed to be used in the ALICE experiment, were also performed. During and after the testbeams I developed software to verify the data quality, to merge 2 data pixels offline, to correlate the spatial information from different planes, and to run a complex offline analysis of the testbeam data, including hit maps, integrated hit maps, event-by-event analysis, efficiency, multiplicity, cluster size, etc. The prototype full read-out chain with two ladders, the DAQ, Trigger and DCS online systems with their software, and also the offline code were tested and validated during the testbeams. Configuration, readout and control of the SPD are performed via the Detector Control System (DCS). As a member of the ALICE Control Coordination (ACC) team, I had the opportunity to participate in the design, development, commissioning and operation of this system. I took responsibility for the database systems and developed mechanisms for configuring the Front-end Electronics (FERO). The SPD has been used as a working example for other detector groups which adopted this approach. 
I developed and implemented a mechanism of conditions data archival and participated in

  4. DoPET: an in-treatment monitoring system for proton therapy at 62 MeV

    Science.gov (United States)

    Rosso, V.; Belcari, N.; Bisogni, M. G.; Camarlinghi, N.; Cirrone, G. A. P.; Collini, F.; Cuttone, G.; Del Guerra, A.; Milluzzo, G.; Morrocchi, M.; Raffaele, L.; Romano, F.; Sportelli, G.; Zaccaro, E.

    2016-12-01

    Proton beam radiotherapy is highly effective in treating cancer thanks to its conformal dose deposition. This superior capability in dose deposition has led to a massive growth in the number of treated patients around the world, raising the need for treatment monitoring systems. An in-treatment PET system, DoPET, was constructed and tested at the CATANA beam-line, LNS-INFN in Catania, where 62 MeV protons are used to treat ocular melanoma. The PET technique profits from the beta+ emitters generated by the proton beam in the irradiated body, mainly 15-O and 11-C. The current DoPET prototype consists of two planar 15 cm × 15 cm LYSO-based detector heads. With respect to the previous versions, the system was enlarged and the DAQ upgraded over the years, so that anthropomorphic phantoms can now also be fitted within the field of view of the system. To demonstrate the capability of DoPET to detect changes in the delivered treatment plan with respect to the planned one, various treatment plans were used, delivering a standard 15 Gy fraction to an anthropomorphic phantom. Data were acquired during and after the treatment delivery, up to 10 minutes. When the in-treatment phase was long enough (more than 1 minute), the corresponding activated volume was visible just after the treatment delivery, even in the presence of a noisy background. The after-treatment data, acquired for about 9 minutes, were segmented, showing that a few minutes are enough to detect changes. These experiments are presented together with studies performed with PMMA phantoms, in which the DoPET response was characterized for different dose rates and in the presence of range shifters: the system response is linear up to 16.9 Gy/min, and the system is able to detect a 1 millimeter range shifter.
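A linearity characterization like the one quoted ("linear up to 16.9 Gy/min") reduces to a least-squares fit of detector response versus dose rate plus a residual check. The data points below are invented for illustration; only the method is sketched.

```python
# Hedged sketch of a linearity check: fit counts vs. dose rate with
# ordinary least squares. The data points are invented examples.
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

dose_rates = [2.0, 4.0, 8.0, 16.0]         # Gy/min
counts = [1000.0, 2000.0, 4000.0, 8000.0]  # detected coincidences (invented)

slope, intercept = linear_fit(dose_rates, counts)
print(round(slope), round(intercept))  # 500 0: a perfectly linear response
```

In practice one would also inspect the residuals at the highest dose rates, since saturation shows up as a systematic downward deviation from the fitted line.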

  5. DoPET: an in-treatment monitoring system for proton therapy at 62 MeV

    International Nuclear Information System (INIS)

    Rosso, V.; Belcari, N.; Bisogni, M.G.; Camarlinghi, N.; Guerra, A. Del; Morrocchi, M.; Sportelli, G.; Zaccaro, E.; Cirrone, G.A.P.; Cuttone, G.; Milluzzo, G.; Raffaele, L.; Romano, F.; Collini, F.

    2016-01-01

    Proton beam radiotherapy is highly effective in treating cancer thanks to its conformal dose deposition. This superior capability in dose deposition has led to a massive growth in the number of treated patients around the world, raising the need for treatment monitoring systems. An in-treatment PET system, DoPET, was constructed and tested at the CATANA beam-line, LNS-INFN in Catania, where 62 MeV protons are used to treat ocular melanoma. The PET technique profits from the beta+ emitters generated by the proton beam in the irradiated body, mainly 15-O and 11-C. The current DoPET prototype consists of two planar 15 cm × 15 cm LYSO-based detector heads. With respect to the previous versions, the system was enlarged and the DAQ upgraded over the years, so that anthropomorphic phantoms can now also be fitted within the field of view of the system. To demonstrate the capability of DoPET to detect changes in the delivered treatment plan with respect to the planned one, various treatment plans were used, delivering a standard 15 Gy fraction to an anthropomorphic phantom. Data were acquired during and after the treatment delivery, up to 10 minutes. When the in-treatment phase was long enough (more than 1 minute), the corresponding activated volume was visible just after the treatment delivery, even in the presence of a noisy background. The after-treatment data, acquired for about 9 minutes, were segmented, showing that a few minutes are enough to detect changes. These experiments are presented together with studies performed with PMMA phantoms, in which the DoPET response was characterized for different dose rates and in the presence of range shifters: the system response is linear up to 16.9 Gy/min, and the system is able to detect a 1 millimeter range shifter.

  6. Development of a vision-based pH reading system

    Science.gov (United States)

    Hur, Min Goo; Kong, Young Bae; Lee, Eun Je; Park, Jeong Hoon; Yang, Seung Dae; Moon, Ha Jung; Lee, Dong Hoon

    2015-10-01

    pH paper is generally used for pH interpretation in the QC (quality control) process of radiopharmaceuticals. pH paper is easy to handle and useful for small samples such as radioisotopes and radioisotope (RI)-labeled compounds for positron emission tomography (PET). However, pH-paper-based detection methods may suffer errors due to the limitations of eyesight and inaccurate readings. In this paper, we report a new device for pH reading and the related software. The proposed pH reading system is developed with a vision algorithm based on an RGB library. The pH reading system is divided into two parts. The first is the reading device, which consists of a light source, a CCD camera and a data acquisition (DAQ) board. To improve the sensitivity, we utilize the three primary colors of the LED (light emitting diode) in the reading device; because of the distinct wavelengths, using the three colors is better than using a single white LED. The other part is a graphical user interface (GUI) program for the vision interface and report generation. The GUI program inserts the color codes of the pH paper into the database; in reading mode, the CCD camera then captures the pH paper and the software compares its color with the RGB database image. The software captures and reports information on the samples, such as pH results, captured images, and library images, and saves them as Excel files.
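The RGB-library matching principle can be sketched as a nearest-color lookup: compare the measured RGB triple against reference entries and return the pH of the closest one. The library values below are invented placeholders, not calibrated pH-paper colors.

```python
# Hedged sketch of RGB-library pH reading: nearest reference color by
# Euclidean distance in RGB space. Library entries are invented.
import math

PH_LIBRARY = {
    4.0: (230, 120, 40),   # orange-ish reference patch
    7.0: (80, 160, 60),    # green-ish reference patch
    10.0: (40, 60, 150),   # blue-ish reference patch
}

def read_ph(rgb):
    """Return the pH of the library entry nearest to the measured color."""
    return min(PH_LIBRARY, key=lambda ph: math.dist(rgb, PH_LIBRARY[ph]))

print(read_ph((85, 150, 70)))  # 7.0: closest to the green reference
```

A calibrated system would build the library from captures of reference strips under the same illumination, and might match in a perceptually uniform space rather than raw RGB.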

  7. Operation and control of high power Gyrotrons for ECRH systems in SST-1 and Aditya

    Energy Technology Data Exchange (ETDEWEB)

    Shukla, B.K., E-mail: shukla@ipr.res.in; Bora, D.; Jha, R.; Patel, Jatin; Patel, Harshida; Babu, Rajan; Dhorajiya, Pragnesh; Dalakoti, Shefali; Purohit, Dharmesh

    2016-11-15

    Highlights: • Operation and control of high power Gyrotrons. • Data acquisition and control (DAQ) for the Gyrotron system. • Ignitron based crowbar protection. • VME and PXI based systems. - Abstract: The Electron Cyclotron Resonance Heating (ECRH) system is an important heating system for the reliable start-up of a tokamak. The 42 GHz and 82.6 GHz ECRH systems are used in the tokamaks SST-1 and Aditya to carry out ECRH related experiments. Gyrotrons are the high power microwave tubes used as sources for ECRH systems. The Gyrotron is a delicate microwave tube that delivers megawatt-level power at very high voltage, ∼40-50 kV, with a current requirement of ∼10 A-50 A. The Gyrotrons are associated with subsystems such as high voltage power supplies (beam voltage and anode voltage), a dedicated crowbar system, magnet, filament and ion pump power supplies, cooling, interlocks and a dedicated data acquisition and control (DAC) system. Two levels of interlocks are used for the protection of the Gyrotron: fast interlocks (arcing, beam over-current, dI/dt, anode voltage and anode over-current, etc.) operate within 10 μs, and slow interlocks (cooling, filament, silence of the Gyrotron, ion pump and magnet currents) operate within 100 ms. Two Gyrotrons (42 GHz/500 kW/500 ms and 82.6 GHz/200 kW/1000 s) have been commissioned on a dummy load at full parameters. The 42 GHz ECRH system has been integrated with the SST-1 and Aditya tokamaks, and various experiments have been carried out on ECRH-assisted breakdown and start-up of the tokamak at the fundamental and second harmonic. These Gyrotrons are operated with a VME based data acquisition and control (DAC) system, capable of acquiring 64 digital and 32 analog signals. The system is used to monitor and acquire the data and also implements the slow interlocks for the protection of the Gyrotron. The acquired data are stored online on the VME system and, after the shot, stored in a file in binary format. The MDSPlus, a set of
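The two-tier interlock logic can be sketched as follows. Signal names and limits are invented examples; in the real system the fast interlocks act in hardware (crowbar) within microseconds, not in software.

```python
# Hedged sketch of two-tier interlock checking: fast interlocks (arcing,
# over-current) versus slow interlocks (cooling, filament). All names
# and limits are invented for illustration.
FAST_LIMITS = {"beam_current_A": 55.0, "anode_current_A": 1.0}  # maxima
SLOW_LIMITS = {"coolant_flow_lpm": 10.0}                        # minimum

def check_interlocks(readings):
    """Return the list of tripped interlocks as (tier, signal) pairs."""
    trips = []
    for name, limit in FAST_LIMITS.items():
        if readings[name] > limit:
            trips.append(("fast", name))   # -> fire crowbar, cut HV
    if readings["coolant_flow_lpm"] < SLOW_LIMITS["coolant_flow_lpm"]:
        trips.append(("slow", "coolant_flow_lpm"))  # -> ramp down, inhibit RF
    return trips

ok = {"beam_current_A": 40.0, "anode_current_A": 0.2, "coolant_flow_lpm": 15.0}
fault = {"beam_current_A": 60.0, "anode_current_A": 0.2, "coolant_flow_lpm": 5.0}

print(check_interlocks(ok))     # []
print(check_interlocks(fault))  # fast beam over-current + slow coolant trip
```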

  8. Conceptual Design, Implementation and Commissioning of Data Acquisition and Control System for Negative Ion Source at IPR

    Science.gov (United States)

    Soni, Jignesh; Yadav, Ratnakar; Gahlaut, A.; Bansal, G.; Singh, M. J.; Bandyopadhyay, M.; Parmar, K. G.; Pandya, K.; Chakraborty, A.

    2011-09-01

    A negative ion experimental facility has been set up at IPR. The facility consists of an RF based negative ion source (ROBIN), procured under a license agreement with IPP Garching as a replica of BATMAN (presently operating at IPP), 100 kW 1 MHz RF generators, a set of low and high voltage power supplies, a vacuum system and diagnostics. A 35 keV, 10 A H- beam is expected from this setup. Successful automated operation of the system requires an advanced, rugged, time-proven and flexible control system. Further, the data generated in the experimental phase need to be acquired, monitored and analyzed to verify and judge the system performance. In the present test bed, this is done using a combination of a PLC based control system and a PXI based data acquisition system. The control system consists of three different Siemens PLC systems: (1) an S7-400 PLC as master control, (2) an S7-300 PLC for vacuum system control and (3) a C7 PLC for RF generator control. The master control PLC directly controls all the subsystems except the vacuum system and the RF generator, which have their own dedicated PLCs (S7-300 and C7 respectively); these two PLC systems act as slaves to the master control PLC. Communication between the S7-400 PLC, the S7-300 PLC and the central control room computer is done through Industrial Ethernet (IE). The control program and GUI are developed in Siemens Step-7 PLC programming software and WinCC SCADA software, respectively. Approximately 150 analog and 200 digital control and monitoring signals are required to perform complete closed loop control of the system. Since the source floats at high potential (˜35 kV), a combination of galvanic and fiber optic isolation has been implemented. The PXI based data acquisition system (DAS) is a combination of a PXI RT (real time) system, front end signal conditioning electronics, a host system and a DAQ program. All the acquisition signals coming from various sub-systems are connected and
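The master/slave PLC hierarchy described above (an S7-400 master driving most subsystems directly, with the vacuum system and RF generator behind their own slave PLCs over Industrial Ethernet) can be sketched as follows. This is only an illustration of the delegation pattern; the class names and command strings are assumptions, not the actual Step-7 program.

```python
# Sketch of the master/slave control hierarchy: the master controls most
# subsystems directly, but delegates vacuum and RF commands to dedicated
# slave PLCs, mirroring the S7-400 / S7-300 / C7 arrangement described above.

class SlavePLC:
    def __init__(self, name):
        self.name, self.log = name, []

    def execute(self, command):
        self.log.append(command)          # slave runs its own local logic
        return f"{self.name}:{command}"

class MasterPLC:
    def __init__(self):
        # subsystems with their own dedicated (slave) controllers
        self.slaves = {"vacuum": SlavePLC("S7-300"), "rf": SlavePLC("C7")}

    def command(self, subsystem, action):
        if subsystem in self.slaves:      # delegated over Industrial Ethernet
            return self.slaves[subsystem].execute(action)
        return f"S7-400:{subsystem}:{action}"  # direct control by the master
```

The design choice mirrors the abstract: subsystems with complex local sequencing (vacuum, RF) keep their own controller, while the master retains supervisory authority.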

  9. Conceptual Design, Implementation and Commissioning of Data Acquisition and Control System for Negative Ion Source at IPR

    International Nuclear Information System (INIS)

    Soni, Jignesh; Gahlaut, A.; Bansal, G.; Parmar, K. G.; Pandya, K.; Chakraborty, A.; Yadav, Ratnakar; Singh, M. J.; Bandyopadhyay, M.

    2011-01-01

    A negative ion experimental facility has been set up at IPR. The facility consists of an RF based negative ion source (ROBIN), procured under a license agreement with IPP Garching as a replica of BATMAN (presently operating at IPP), 100 kW 1 MHz RF generators, a set of low and high voltage power supplies, a vacuum system and diagnostics. A 35 keV, 10 A H- beam is expected from this setup. Successful automated operation of the system requires an advanced, rugged, time-proven and flexible control system. Further, the data generated in the experimental phase need to be acquired, monitored and analyzed to verify and judge the system performance. In the present test bed, this is done using a combination of a PLC based control system and a PXI based data acquisition system. The control system consists of three different Siemens PLC systems: (1) an S7-400 PLC as master control, (2) an S7-300 PLC for vacuum system control and (3) a C7 PLC for RF generator control. The master control PLC directly controls all the subsystems except the vacuum system and the RF generator, which have their own dedicated PLCs (S7-300 and C7 respectively); these two PLC systems act as slaves to the master control PLC. Communication between the S7-400 PLC, the S7-300 PLC and the central control room computer is done through Industrial Ethernet (IE). The control program and GUI are developed in Siemens Step-7 PLC programming software and WinCC SCADA software, respectively. Approximately 150 analog and 200 digital control and monitoring signals are required to perform complete closed loop control of the system. Since the source floats at high potential (∼35 kV), a combination of galvanic and fiber optic isolation has been implemented. The PXI based data acquisition system (DAS) is a combination of a PXI RT (real time) system, front end signal conditioning electronics, a host system and a DAQ program. All the acquisition signals coming from various sub-systems are connected and

  10. Overview of data acquisition and central control system of steady state superconducting Tokamak (SST-1)

    International Nuclear Information System (INIS)

    Pradhan, S.; Mahajan, K.; Gulati, H.K.; Sharma, M.; Kumar, A.; Patel, K.; Masand, H.; Mansuri, I.; Dhongde, J.; Bhandarkar, M.; Chudasama, H.

    2016-01-01

    and software architecture is capable of fulfilling the present operation and control requirements of SST-1. The CCS has been successfully validated in several operation campaigns of SST-1 since 2013. The lossless PXI based data acquisition of SST-1 is capable of acquiring around 130 channels at sampling frequencies ranging from 10 kHz to 1 MHz, and of acquiring a volume of up to 500 Mbytes of data in each shot. An indigenously developed Matlab based software utility is used to analyze the data. The complete system has also been validated in several operation campaigns of SST-1 since 2013. This paper provides an overview of all the above mentioned subsystems of the SST-1 DAQ and the SST-1 CCS, focusing on the design, architecture, performance, lessons learned and future upgrade plans.
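The quoted per-shot volume of 500 Mbytes can be sanity-checked with simple arithmetic. The 2-byte sample size below is an assumption (the abstract does not state a bit depth); the channel count and rates come from the abstract.

```python
# Back-of-the-envelope shot-volume estimate for a multi-channel DAQ,
# assuming (our assumption) 2 bytes per sample.

def shot_volume_mbytes(channels, rate_hz, seconds, bytes_per_sample=2):
    """Raw data volume in Mbytes for one shot."""
    return channels * rate_hz * seconds * bytes_per_sample / 1e6

# 130 channels all running at the top rate of 1 MHz for a 2 s window would
# already fill the 500 Mbyte budget; slower channels leave headroom.
fast_case = shot_volume_mbytes(130, 1e6, 2.0)   # 520.0 Mbytes
slow_case = shot_volume_mbytes(130, 1e4, 2.0)   #   5.2 Mbytes at 10 kHz
```

Since SST-1 channels span 10 kHz to 1 MHz, the actual mix plausibly lands near the stated 500 Mbytes per shot.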

  11. Overview of data acquisition and central control system of steady state superconducting Tokamak (SST-1)

    Energy Technology Data Exchange (ETDEWEB)

    Pradhan, S., E-mail: pradhan@ipr.res.in; Mahajan, K.; Gulati, H.K.; Sharma, M.; Kumar, A.; Patel, K.; Masand, H.; Mansuri, I.; Dhongde, J.; Bhandarkar, M.; Chudasama, H.

    2016-11-15

    and software architecture is capable of fulfilling the present operation and control requirements of SST-1. The CCS has been successfully validated in several operation campaigns of SST-1 since 2013. The lossless PXI based data acquisition of SST-1 is capable of acquiring around 130 channels at sampling frequencies ranging from 10 kHz to 1 MHz, and of acquiring a volume of up to 500 Mbytes of data in each shot. An indigenously developed Matlab based software utility is used to analyze the data. The complete system has also been validated in several operation campaigns of SST-1 since 2013. This paper provides an overview of all the above mentioned subsystems of the SST-1 DAQ and the SST-1 CCS, focusing on the design, architecture, performance, lessons learned and future upgrade plans.

  12. Graphics in DAQSIM

    International Nuclear Information System (INIS)

    Wang, C.C.; Booth, A.W.; Chen, Y.M.; Botlo, M.

    1993-06-01

    At the Superconducting Super Collider Laboratory (SSCL) a tool called DAQSIM has been developed to study the behavior of Data Acquisition (DAQ) systems. This paper reports and discusses the graphics used in DAQSIM. DAQSIM graphics include a graphical user interface (GUI), animation, debugging, and control facilities. These graphics not only provide a convenient DAQ simulation environment, but also serve as an efficient aid in simulation development and verification.

  13. State of art data acquisition system for large volume plasma device

    International Nuclear Information System (INIS)

    Sugandhi, Ritesh; Srivastava, Pankaj; Sanyasi, Amulya Kumar; Srivastav, Prabhakar; Awasthi, Lalit Mohan; Mattoo, Shiban Krishna; Parmar, Vijay; Makadia, Keyur; Patel, Ishan; Shah, Sandeep

    2015-01-01

    The Large Volume Plasma Device (LVPD) is a cylindrical device (ϕ = 2 m, L = 3 m) dedicated to investigations of plasma physics problems ranging from the excitation of whistler structures to plasma turbulence, especially the linear and nonlinear aspects of electron temperature gradient (ETG) driven turbulence and plasma transport over the entire cross section of LVPD. The machine operates in a pulsed mode with a repetition cycle of 1 Hz and an acquisition pulse length of 15 ms. Presently, LVPD has a VXI data acquisition system, but this is being phased out because of the failure of its various amplifier stages, limited expandability and the unavailability of service support. The VXI system has limited capabilities to meet new experimental requirements in terms of number of channels (16), bit resolution (8 bit), record length (30k points) and calibration support. Recently, the integration of a new acquisition system for simultaneous high speed sampling of 40 channels of data, collected over multiple time scales, was successfully demonstrated by configuring the latest available hardware and in-house developed software solutions. The LabVIEW platform provides operational feasibility not only for operating the DAQ system but also for controlling the various subsystems associated with the device. The new system is based on the PXI Express instrumentation bus and supersedes the existing VXI based data acquisition system in terms of instrumentation capabilities. This system can measure 32 signals at 60 MHz sampling frequency and 8 signals at 1.25 GHz, with 10 bit and 12 bit resolution for amplitude measurements. The PXI based system successfully addresses and demonstrates the issues concerning high channel count, high speed data streaming and multiple I/O module synchronization. The system consists of a chassis (NI 1085), 4 high sampling digitizers (NI 5105), 2 very high sampling digitizers (NI 5162), a data streaming RAID drive (NI
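The data volume implied by the digitizer mix above can be estimated from the 15 ms acquisition window and the stated channel counts and rates. The 2-bytes-per-sample storage assumption below is ours (samples of 10-12 bit resolution are typically stored in 16-bit words).

```python
# Per-pulse data volume for the LVPD digitizer mix, assuming (our assumption)
# each sample occupies 2 bytes in memory. Window length (15 ms), channel
# counts and sampling rates come from the abstract.

def pulse_data_mbytes(channels, sample_rate_hz, window_s=0.015, bytes_per_sample=2):
    """Raw data produced by one group of channels in one acquisition pulse."""
    return channels * sample_rate_hz * window_s * bytes_per_sample / 1e6

mid_speed  = pulse_data_mbytes(32, 60e6)    # 32 ch at 60 MS/s  -> ~57.6 Mbytes
high_speed = pulse_data_mbytes(8, 1.25e9)   # 8 ch at 1.25 GS/s -> ~300 Mbytes
```

Several hundred Mbytes per 15 ms pulse at a 1 Hz repetition rate is exactly the regime where streaming to a RAID drive, as the abstract describes, becomes necessary.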

  14. Beam diagnostics based on virtual instrument technology for HLS

    International Nuclear Information System (INIS)

    Sun Baogen; Lu Ping; Wang Xiaohui; Wang Baoyun; Wang Junhua; Gu Liming; Fang Jia; Ma Tianji

    2009-01-01

    The paper introduces the beam diagnostics system using virtual instrument technology for the Hefei Light Source (HLS), which includes a GPIB bus-based DCCT measurement system to measure the beam DC current and beam lifetime, a VXIbus-based closed orbit measurement system to measure the beam position, a PCIbus-based beam profile measurement system to measure the beam profile and emittance, a GPIB-LAN based bunch length measurement system using a photoelectric method, and an Ethernet-based photon beam position measurement system. The software is programmed in LabVIEW, which greatly reduces the development effort. (authors)

  15. Real-time algorithms for JET hard X-ray and gamma-ray profile monitor

    International Nuclear Information System (INIS)

    Fernandes, A.; Pereira, R.C.; Valcárcel, D.F.; Alves, D.; Carvalho, B.B.; Sousa, J.; Kiptily, V.; Correia, C.M.B.A.; Gonçalves, B.

    2014-01-01

    Highlights: • Real-time tools and mechanisms are required for data handling and machine control. • A new DAQ system, ATCA based, with embedded FPGAs, was installed at JET. • Different real-time algorithms were developed for the FPGAs and the MARTe application. • MARTe provides the interface to CODAS and to the JET real-time network. • The new DAQ system is capable of processing and delivering data in real time. - Abstract: The steady state operation with high energy content foreseen for future generations of fusion devices will necessarily demand dedicated real-time tools and mechanisms for data handling and machine control. Consequently, the real-time systems for those devices should be carefully selected and their capabilities established in advance. The Joint European Torus (JET) is undertaking an enhancement program, which includes tests of relevant real-time tools for the International Thermonuclear Experimental Reactor (ITER), a key experiment for future fusion devices. These enhancements include a new Data AcQuisition (DAQ) system, with real-time processing capabilities, for the JET hard X-ray and gamma-ray profile monitor. The DAQ system is composed of dedicated digitizer modules with embedded Field Programmable Gate Array (FPGA) devices. The interface between the DAQ system, the JET control and data acquisition system and the JET real-time data network is provided by the Multithreaded Application Real-Time executor (MARTe). This paper describes the real-time algorithms, developed for both the digitizers' FPGAs and the MARTe application, capable of meeting the DAQ real-time requirements. The new DAQ system, including the embedded real-time features, was commissioned during the 2012 experiments. Results achieved with these real-time algorithms during experiments are presented.
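The abstract does not detail the FPGA algorithms, but a typical real-time task for a hard X-ray or gamma-ray profile monitor is pulse detection followed by pulse-height histogramming. The sketch below illustrates that generic idea only; it is not the actual JET firmware, and all names and parameters are assumptions.

```python
# Illustrative pulse-height analysis: detect pulses crossing a threshold in a
# digitized waveform and accumulate their peak amplitudes into a histogram,
# the kind of reduction an FPGA would perform before shipping data to MARTe.

def pulse_height_histogram(samples, threshold, n_bins, max_height):
    """Return a histogram (list of counts) of peak heights of detected pulses."""
    hist = [0] * n_bins
    i, n = 0, len(samples)
    while i < n:
        if samples[i] > threshold:
            peak = samples[i]
            while i < n and samples[i] > threshold:   # walk through one pulse
                peak = max(peak, samples[i])
                i += 1
            bin_index = min(int(peak * n_bins / max_height), n_bins - 1)
            hist[bin_index] += 1
        else:
            i += 1
    return hist
```

Reducing raw waveforms to histograms on the digitizer is what makes real-time delivery feasible: the data rate to the network scales with the event rate, not the sampling rate.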

  16. Real-time algorithms for JET hard X-ray and gamma-ray profile monitor

    Energy Technology Data Exchange (ETDEWEB)

    Fernandes, A., E-mail: anaf@ipfn.ist.utl.pt [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal); Pereira, R.C.; Valcárcel, D.F.; Alves, D.; Carvalho, B.B.; Sousa, J. [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal); Kiptily, V. [EURATOM/CCFE Fusion Association, Culham Centre for Fusion Energy, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); Correia, C.M.B.A. [Centro de Instrumentação, Dept. de Física, Universidade de Coimbra, 3004-516 Coimbra (Portugal); Gonçalves, B. [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal)

    2014-03-15

    Highlights: • Real-time tools and mechanisms are required for data handling and machine control. • A new DAQ system, ATCA based, with embedded FPGAs, was installed at JET. • Different real-time algorithms were developed for the FPGAs and the MARTe application. • MARTe provides the interface to CODAS and to the JET real-time network. • The new DAQ system is capable of processing and delivering data in real time. - Abstract: The steady state operation with high energy content foreseen for future generations of fusion devices will necessarily demand dedicated real-time tools and mechanisms for data handling and machine control. Consequently, the real-time systems for those devices should be carefully selected and their capabilities established in advance. The Joint European Torus (JET) is undertaking an enhancement program, which includes tests of relevant real-time tools for the International Thermonuclear Experimental Reactor (ITER), a key experiment for future fusion devices. These enhancements include a new Data AcQuisition (DAQ) system, with real-time processing capabilities, for the JET hard X-ray and gamma-ray profile monitor. The DAQ system is composed of dedicated digitizer modules with embedded Field Programmable Gate Array (FPGA) devices. The interface between the DAQ system, the JET control and data acquisition system and the JET real-time data network is provided by the Multithreaded Application Real-Time executor (MARTe). This paper describes the real-time algorithms, developed for both the digitizers' FPGAs and the MARTe application, capable of meeting the DAQ real-time requirements. The new DAQ system, including the embedded real-time features, was commissioned during the 2012 experiments. Results achieved with these real-time algorithms during experiments are presented.

  17. A versatile trigger and synchronization module with IEEE1588 capabilities and EPICS support

    International Nuclear Information System (INIS)

    Lopez, J.M.; Ruiz, M.; Borrego, J.; Arcas, G. de; Barrera, E.; Vega, J.

    2010-01-01

    Event timing and synchronization are two key aspects to improve in the implementation of distributed data acquisition (dDAQ) systems such as the ones used in fusion experiments. The integration of dDAQ systems into control and measurement networks is also of great importance. This paper analyzes the applicability of the IEEE1588 and EPICS standards to solve these problems, and presents a hardware module implementation based on both of them that allows these functionalities to be added to any DAQ. The IEEE1588 standard facilitates the integration of event timing and synchronization mechanisms in distributed data acquisition systems based on IEEE 802.3 (Ethernet). An optimal implementation of such a system requires network interface devices which include specific hardware resources devoted to the IEEE1588 functionalities. Unfortunately, this is not the approach followed in most of the large number of applications available nowadays; most solutions are software based and use standard hardware network interfaces. This paper presents the development of a hardware module (GI2E) with IEEE1588 capabilities which includes USB, RS232, RS485 and CAN interfaces. This permits any DAQ element that uses these interfaces to be integrated into dDAQ systems in an efficient and simple way. The module has been developed with Motorola's ColdFire MCF5234 processor and National Semiconductor's DP83640T PHY, allowing the PTP protocol of IEEE1588 to be implemented in hardware and therefore increasing its performance over software based implementations. To facilitate the integration of the dDAQ system in control and measurement networks, the module includes basic Input/Output Controller (IOC) functionality from the Experimental Physics and Industrial Control System (EPICS) architecture. The paper discusses the implementation details of this module and presents its applications in advanced dDAQ applications in the fusion community.
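What the DP83640T-style hardware timestamping accelerates is the core PTP exchange: the slave collects four timestamps from the Sync/Delay_Req handshake and computes its clock offset and the mean path delay. The arithmetic itself is standard IEEE1588; the function name below is ours.

```python
# Core IEEE1588 (PTP) slave arithmetic. Hardware timestamping at the PHY
# makes t1..t4 accurate to nanoseconds; the computation itself is simple.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync; t2: slave receives it;
    t3: slave sends Delay_Req; t4: master receives it.
    Returns (offset of slave clock vs master, one-way path delay),
    assuming a symmetric network path."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay
```

The symmetric-path assumption is why software-only implementations suffer: queuing jitter in the network stack appears directly as apparent asymmetry, which PHY-level timestamping removes.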

  18. The Calibration System of the E989 Experiment at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Anastasi, Antonio [Univ. of Messina (Italy)

    2017-01-01

    The muon anomaly aµ is one of the most precisely known quantities in physics, both experimentally and theoretically. This high level of accuracy permits the measurement of aµ to be used as a test of the Standard Model by comparison with the theoretical calculation. After the impressive result obtained at Brookhaven National Laboratory in 2001, with a total accuracy of 0.54 ppm, a new experiment, E989, is under construction at Fermilab, motivated by the ∼3σ difference between aµ(exp) and aµ(SM). The purpose of the E989 experiment is a fourfold reduction of the error, with a goal of 0.14 ppm, improving both the systematic and statistical uncertainty. With the use of the Fermilab beam complex, a statistics sample 21 times that of BNL will be reached in almost 2 years of data taking, improving the statistical uncertainty to 0.1 ppm. Improvement of the systematic error involves the measurement techniques for ωa and ωp, the anomalous precession frequency of the muon and the Larmor precession frequency of the proton, respectively. The measurement of ωp involves the magnetic field measurement, and improvements in this sector related to the uniformity of the field should reduce the systematic uncertainty with respect to BNL from 170 ppb to 70 ppb. A reduction from 180 ppb to 70 ppb is also required for the measurement of ωa; a new DAQ, faster electronics, and new detectors and a new calibration system will be implemented with respect to E821 to reach this goal. In particular, the laser calibration system will reduce the systematic error due to gain fluctuations of the photodetectors from 0.12 to 0.02 ppm. The 0.02 ppm limit on the systematics requires a system with a stability of 10⁻⁴ on a short time scale (700 µs), while on longer time scales the stability requirement is at the percent level. The 10⁻⁴ stability level required is almost an order of magnitude better than existing laser calibration systems in particle physics, making the calibration system a very challenging item. In addition to the high level

  19. Data Acquisition Software for Experiments at the MAMI-C Tagged Photon Facility

    Science.gov (United States)

    Oussena, Baya; Annand, John

    2013-10-01

    Tagged-photon experiments at Mainz use the electron beam of the MAMI (Mainzer MIcrotron) accelerator in combination with the Glasgow Tagged Photon Spectrometer. The AcquDAQ data acquisition (DAQ) system is implemented in the C++ language and makes use of CERN ROOT software libraries and tools. Electronic hardware is characterized in C++ classes based on a general purpose class TDAQmodule, and implementation in an object-oriented framework makes the system very flexible. The DAQ system provides slow control and event-by-event readout of the Photon Tagger, the Crystal Ball 4π electromagnetic calorimeter, the central MWPC tracker and plastic-scintillator particle-ID systems, and the TAPS forward-angle calorimeter. A variety of front-end controllers running Linux are supported, reading data from VMEbus, FASTBUS and CAMAC systems. More specialist hardware, based on optical communication systems and developed for the COMPASS experiment at CERN, is also supported. AcquDAQ also provides an interface to configure and control the Mainz programmable trigger system, which uses FPGA-based hardware developed at GSI. Currently the DAQ system runs at data rates of up to 3 MB/s and, with upgrades to both hardware and software later this year, we anticipate a doubling of that rate. This work was supported in part by the U.S. DOE Grant No. DE-FG02-99ER41110.
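AcquDAQ's flexibility comes from deriving every piece of hardware from a common base class (TDAQmodule) so that event-by-event readout can loop over heterogeneous modules through one interface. The real system is C++; the Python sketch below only illustrates the pattern, and every class and method name other than the cited TDAQmodule is an assumption.

```python
# Object-oriented readout pattern in the spirit of AcquDAQ's TDAQmodule:
# a common base interface lets the event loop treat VMEbus, FASTBUS and
# CAMAC modules uniformly. Names below are illustrative, not the real API.

class DAQModule:                          # stand-in for TDAQmodule
    def __init__(self, name):
        self.name = name

    def read_event(self):
        raise NotImplementedError         # each hardware class implements this

class VMEScaler(DAQModule):
    """Toy VMEbus module returning a fixed counter value."""
    def __init__(self, name, counts):
        super().__init__(name)
        self._counts = counts

    def read_event(self):
        return {self.name: self._counts}

def readout(modules):
    """Event-by-event readout: poll every module through the base interface."""
    event = {}
    for m in modules:
        event.update(m.read_event())
    return event
```

Adding support for a new crate or bus then means writing one subclass, with the event loop untouched, which is the flexibility the abstract credits to the object-oriented framework.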

  20. Development and clinical application of a computer-aided real-time feedback system for detecting in-bed physical activities.

    Science.gov (United States)

    Lu, Liang-Hsuan; Chiang, Shang-Lin; Wei, Shun-Hwa; Lin, Chueh-Ho; Sung, Wen-Hsu

    2017-08-01

    Being bedridden long-term can cause deterioration in patients' physiological function and performance, limiting daily activities and increasing the incidence of falls and other accidental injuries. Little research has been carried out on designing effective detection systems to monitor the posture and status of bedridden patients and to provide accurate real-time feedback on posture. The purposes of this research were to develop a computer-aided system for real-time detection of physical activities in bed and to validate the system's validity and test-retest reliability in determining eight postures: motion leftward/rightward, turning over leftward/rightward, getting up leftward/rightward, and getting off the bed leftward/rightward. The in-bed physical activity detecting system consists mainly of a clinical sickbed, a signal amplifier, a data acquisition (DAQ) system, and operating software for computing and determining postural changes associated with four load cell sensing components. Thirty healthy subjects (15 males and 15 females, mean age = 27.8 ± 5.3 years) participated in the study. All subjects were asked to execute eight in-bed activities in a random order and to participate in an evaluation of the test-retest reliability of the results 14 days later. Spearman's rank correlation coefficient was used to compare the system's determinations of postural states with researchers' recordings of postural changes. The test-retest reliability of the system's ability to determine postures was analyzed using the intraclass correlation coefficient ICC(3,1). The system was found to exhibit high validity and accuracy (r = 0.928, p …). The system was particularly accurate in detecting motion rightward (90%), turning over leftward (83%), sitting up leftward or rightward (87-93%), and getting off the bed (100%). The test-retest reliability ICC(3,1) value was 0.968 (p …). The system developed in this study exhibits satisfactory validity and reliability in detecting changes in
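The abstract describes four load cells under the bed as the sensing components. A natural way to detect the leftward/rightward activities it lists is the lateral centre of pressure computed from the four cell readings. The geometry, normalization and threshold below are our assumptions for illustration; the paper's actual classification algorithm is not described in the abstract.

```python
# Sketch: classify left/right in-bed motion from four load cells (front-left,
# front-right, rear-left, rear-right) via the lateral centre of pressure.

def lateral_cop(front_left, front_right, rear_left, rear_right):
    """Left-right centre of pressure in [-1, 1]; negative means leftward."""
    left, right = front_left + rear_left, front_right + rear_right
    total = left + right
    if total == 0:
        return 0.0              # no load: nobody on the bed
    return (right - left) / total

def classify(cop, threshold=0.2):
    """Map a centre-of-pressure value to a coarse posture label."""
    if cop < -threshold:
        return "leftward"
    if cop > threshold:
        return "rightward"
    return "centred"
```

A real system would track this value over time (and the total load, to detect getting off the bed) rather than classify single samples, but the lateral balance of the four cells is the underlying signal.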