2.10 Data Acquisition (DAS) and Trigger

The Data Acquisition system (DAS, rather than the more common acronym, DAQ) [51] is responsible for reading out the digitized data from the various parts of the detector and recording the results for subsequent analysis.

The basic problem to be overcome by most data acquisition systems is the difference in rate between beam collisions (in LEP, beam cross-over, or BCO, occurs every 22 or 11 $\mu\mathrm{s}$ depending upon whether there are 4 or 8 bunches per beam) and the maximum possible (or desirable) data readout and storage rate (perhaps 20 Hz). The Trigger system [52] overcomes this problem by reading out the detector only when there are indications of a significant interaction.
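The scale of the mismatch can be made explicit with a little arithmetic. The sketch below is illustrative only (it is not DELPHI code); it simply turns the BCO periods and readout rate quoted above into the rejection factor the trigger must supply.

```python
# Illustrative arithmetic only (not DELPHI code): the rejection factor
# the trigger must supply, from the rates quoted in the text.

def crossing_rate_hz(bco_period_us: float) -> float:
    """Beam cross-over (BCO) rate for a given period in microseconds."""
    return 1e6 / bco_period_us

rate_4_bunches = crossing_rate_hz(22.0)      # about 45 kHz
rate_8_bunches = crossing_rate_hz(11.0)      # about 91 kHz

max_readout_hz = 20.0
rejection = rate_8_bunches / max_readout_hz  # roughly 4500:1
```

So even before any physics selection, the trigger chain must reject crossings by more than three orders of magnitude.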

Four levels of triggering are used in DELPHI. The first level, T1, uses no correlations between detector partitions, and takes information only from detectors with fast readout times: ID, TPC ($\theta < 43^\circ$), OD, FCA, FCB, TOF, HOF, HPC scintillators, EMF, and MUB; the decision can therefore be made 3.5 $\mu\mathrm{s}$ after BCO. Once the TPC, HPC, and MUF drift times are complete, the signals from these detectors (available 23 $\mu\mathrm{s}$ after BCO), as well as correlations between detectors, can be added to provide the second-level trigger, T2, 39 $\mu\mathrm{s}$ after BCO. At a nominal luminosity of $1.5\times10^{31}\ \mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, the $\sim700$ Hz (T1) and $\sim4.5$ Hz (T2) trigger rates produce dead times of 2% (T1) and 1% (T2). The various T2 components are designed to select tracks, muon signals, calorimeter energy deposits, or Bhabha events. There is considerable redundancy between Trigger components, allowing an accurate determination of the trigger efficiency, which is barely distinguishable from 100% for hadronic events.
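The quoted T1 dead time can be checked with a back-of-the-envelope model. The assumption here (mine, not necessarily DELPHI's accounting) is that the detector is busy from each T1 accept until the T2 decision, i.e. for roughly $39 - 3.5 = 35.5$ $\mu\mathrm{s}$ per T1 trigger:

```python
# Back-of-the-envelope dead-time estimate. The busy time per T1 trigger
# (T2 decision time minus T1 decision time) is an assumption for
# illustration, not a figure taken from the DELPHI trigger documentation.

def dead_time_fraction(trigger_rate_hz: float, busy_time_s: float) -> float:
    """Fraction of live time lost, valid when rate * busy_time << 1."""
    return trigger_rate_hz * busy_time_s

t1_busy_s = (39.0 - 3.5) * 1e-6          # 35.5 microseconds
t1_dead = dead_time_fraction(700.0, t1_busy_s)  # about 0.025
```

The result, about 2.5%, is consistent with the $\sim$2% quoted above.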

T3 repeats the T2 logic in software using digitized and calibrated data, allowing tighter cuts to be applied. T4 uses a tailored version of the DELPHI reconstruction program, DELANA (see section 2.11.1), to reject events with no track pointing towards the interaction region and no energy deposit in the calorimeters. The T3 and T4 processing occurs asynchronously with respect to BCO, introducing no dead time. Each reduces the data rate by a factor of two, so data is recorded at a rate of about 1 Hz.

The data acquisition performed in the counting rooms in the experimental cavern is based on the Fastbus standard [53]. Each detector partition has its own digitization modules, most with a 4-event Front-End Buffer (FEB) to reduce the loss of events due to chance spurts in the trigger rate. Following a positive T2 signal, the FEB data are copied (asynchronously with respect to BCO) to the Crate Event Buffer (CEB) by the Crate Processor (CP) program, running in a Fastbus Intersegment Processor (FIP). The FIP also performs local T3 processing. Data from a number of CEBs are merged by the partition's Local Event Supervisor (LES; also running in a FIP) into the Spy-Event Buffer (SEB; used for local monitoring) and Multi-Event Buffer (MEB).
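The role of the 4-event FEB can be illustrated with a toy model (this is not DELPHI code; the class and method names are invented for the sketch). Triggers arriving while the buffer is full are lost; a deeper buffer makes losses during trigger spurts less likely:

```python
from collections import deque

class FrontEndBuffer:
    """Toy model (not DELPHI code) of a fixed-depth front-end buffer:
    a trigger arriving while the buffer is full loses that event."""

    def __init__(self, depth: int = 4):
        self.depth = depth
        self.events = deque()
        self.lost = 0

    def trigger(self, event) -> bool:
        """Store an event on a positive trigger; return False if full."""
        if len(self.events) < self.depth:
            self.events.append(event)
            return True
        self.lost += 1
        return False

    def read_out(self):
        """Asynchronous readout drains the oldest buffered event."""
        return self.events.popleft() if self.events else None

# A spurt of 5 triggers before any readout: one event is lost.
feb = FrontEndBuffer(depth=4)
for i in range(5):
    feb.trigger(i)
```

Because the copy to the CEB proceeds asynchronously, the FEB is normally drained between triggers, so losses occur only during such spurts.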

For events that pass T3, individual partitions' MEB data are combined by the Global Event Supervisor (GES) into the Global Event Buffer (GEB) and transferred to the VAX mainframe on the surface via optical fibre. On the VAX the data are managed by the Model Buffer Manager (MBM) and written to disk by Data Logger processes. T4 processing is performed, and selected events are written to disk and then copied to tape (before 1995: 250 MB IBM 3480 cartridges, written locally; now 10 GB Digital Linear Tape (DLT) in the CERN central computer centre).

Tim Adye 2002-11-06