
The CBM First-level Event Selector

The CBM Experiment Setup at FAIR
The First-Level Event Selector and Data Flow in CBM

The CBM experiment at the upcoming FAIR accelerator aims to create the highest baryon densities in nucleus-nucleus collisions and to explore the properties of super-dense nuclear matter. Event rates of 10 MHz are needed for high-statistics measurements of rare probes, while event selection requires complex global trigger decisions such as a secondary vertex search. To meet these demands, the CBM experiment uses self-triggered detector front-ends and a data-push readout architecture.

The First-level Event Selector (FLES) is the central physics selection system in CBM. It performs full on-line event reconstruction on the 1 TB/s input data stream. The current architecture foresees a scalable high-performance computer, expanding on the experience gained with the ALICE HLT system. To achieve the required throughput and computational efficiency, all available computing devices will have to be used: in particular FPGAs at the first stages of the system for hit combination and cluster finding, and GPUs for the later track reconstruction.

One particular issue is the orchestration of the data flow in order to avoid unnecessary congestion. The key aspects of the FLES, as envisioned today, are:

  • Input data rate of 1 TB/s (e.g., 1000 10-Gbit/s links)
  • Event rate of 10 MHz with self-triggered detector readout
  • Scalability from an initial system with full connectivity but reduced capacity up to the full-capability system
  • Dynamic and fault tolerant load balancing and flow control in the system
  • Interface to DAQ/front-end via a common PCI Express x16 interface module, hosting:
    • High density FPGA with built-in multi-gigabit serial transceivers
    • On-board de-randomizing and reordering buffers
    • DAQ and FLES functionality in same FPGA
  • Direct output interface to mass storage
  • Possibly required data-flow control network (TagNET), to be defined in the R&D phase
  • Use of COTS hardware wherever possible
    • Rack-mount PC
    • GPGPUs as hardware accelerators
    • Commercial networks (InfiniBand QDR as baseline)
  • Use of FPGAs wherever economically applicable (e.g., DAQ interface, cluster finder)
  • Very fast on-line event reconstruction, using SIMD vectorization and many-core architectures
  • Collaboration support for the development of efficient detector and event reconstruction algorithms as well as physics selection algorithms
