Lindenstruth Group - Research
Our group is involved in a variety of natural and life science research topics, with a concentration on the architecture, utilization, and development of high-performance computing. We collaborate with accelerator facilities, such as the LHC at CERN (Geneva, Switzerland) and FAIR at GSI (Darmstadt), where we are responsible for the read-out and analysis of data from the ALICE and CBM experiments.
Additionally, we are highly active in the CMMS project, a Frankfurt-wide program for the multi-scale modeling, analysis, and simulation of biological processes. We also develop and apply classification algorithms, using both standard analysis techniques and machine-learning methods.
ALICE (A Large Ion Collider Experiment) is the dedicated heavy-ion experiment of the Large Hadron Collider (LHC) located at the European Centre for Particle Physics (CERN).
The primary aim of ALICE is to characterize the properties of strongly interacting matter known as the Quark-Gluon Plasma (QGP). To study such a state of matter in the laboratory, extreme conditions must be created; this is currently realized via heavy-ion collisions at very high energies in particle accelerators.
The final-state particles produced in heavy-ion collisions stream into the various detectors of ALICE, yielding significant data rates. The HLT system of ALICE, a compute farm composed of 188 nodes, has been processing such collisions since 2009. It utilizes FPGA- and GPU-based algorithms to reconstruct charged-particle trajectories and compress the data in real time. The use of such technologies by the HLT pioneered hardware-accelerated real-time computing at the LHC [arXiv:1812.08036].
In 2021 the LHC will begin Run 3, during which ALICE is expected to collect 100 times more data than it has recorded since 2009. The detector readout will change from triggered to continuous, supporting a heavy-ion collision rate of up to 50 kHz. This increase in statistics requires that collision events be reconstructed and calibrated synchronously. Additionally, in order to transfer the data to permanent storage, the data stream must be compressed by a factor of more than 20. To meet these challenges, the concepts and technologies advanced by the HLT are being studied and tested for the new framework.
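The scale of this requirement can be illustrated with a back-of-the-envelope calculation. The 50 kHz collision rate and the compression factor of more than 20 come from the figures above; the raw event size used below is a hypothetical placeholder, not an ALICE specification:

```python
# Back-of-the-envelope sketch of the Run 3 data-reduction requirement.
# RAW_EVENT_SIZE_MB is an illustrative assumption, NOT an ALICE figure.

RAW_EVENT_SIZE_MB = 50.0      # hypothetical raw size of one heavy-ion event
COLLISION_RATE_HZ = 50_000    # continuous readout, up to 50 kHz
COMPRESSION_FACTOR = 20       # minimum factor required before permanent storage

raw_rate_gb_s = RAW_EVENT_SIZE_MB * COLLISION_RATE_HZ / 1024
stored_rate_gb_s = raw_rate_gb_s / COMPRESSION_FACTOR

print(f"raw data rate:    {raw_rate_gb_s:.1f} GB/s")
print(f"rate to storage:  {stored_rate_gb_s:.1f} GB/s")
```

Even under modest assumptions for the event size, the input rate is orders of magnitude above what can be stored, which is why synchronous reconstruction and compression are unavoidable.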
The CBM (Compressed Baryonic Matter) experiment at the FAIR facility will investigate heavy-ion reactions at unprecedented interaction rates. This requires a paradigm shift in read-out and data-acquisition concepts: self-triggered front-end electronics and free-streaming data will be employed. Events will be selected by a high-performance computer cluster, the First-level Event Selector (FLES), which will perform complete event reconstruction and online analysis.
The FLES network infrastructure has been designed around the results of studies that examined efficient data transport over long-haul connections. First results show that future-generation InfiniBand network equipment may provide sufficient reach for the distances relevant to CBM. Concerning event reconstruction, the algorithms used for local hit determination in the CBM subdetectors have been optimized, significantly reducing their runtimes.
Currently, several detectors are integrated into a prototype experiment, called mini-CBM, which allows the study of combined and synchronous data taking based on the demonstrator setup for the future FLES. In addition, a prototype of a high-level experiment control system has been implemented; it contains all levels of state machines and, in particular, the functionality to manage the configuration needed to start a data-taking run.
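A run-control system of the kind described above is typically built around a finite state machine that couples configuration management to state transitions. The sketch below illustrates the general idea; the state names, commands, and configuration keys are hypothetical and not taken from the actual CBM software:

```python
# Illustrative sketch of a run-control state machine.
# States, commands, and config keys are hypothetical examples.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    CONFIGURED = auto()
    RUNNING = auto()

# allowed transitions: command -> (required current state, next state)
TRANSITIONS = {
    "configure": (State.IDLE, State.CONFIGURED),
    "start":     (State.CONFIGURED, State.RUNNING),
    "stop":      (State.RUNNING, State.CONFIGURED),
    "reset":     (State.CONFIGURED, State.IDLE),
}

class RunControl:
    def __init__(self):
        self.state = State.IDLE
        self.config = {}

    def command(self, name, **config):
        required, nxt = TRANSITIONS[name]
        if self.state is not required:
            raise RuntimeError(f"cannot '{name}' while {self.state.name}")
        if name == "configure":
            self.config = config   # settings applied before a run can start
        self.state = nxt

rc = RunControl()
rc.command("configure", readout="free-streaming")
rc.command("start")
print(rc.state.name)  # → RUNNING
```

Rejecting commands that arrive in the wrong state is what makes it safe to distribute such a machine across many detector subsystems that must start a run in lockstep.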
The main objective of CMMS is to gain a broad understanding of biological processes, from simple molecular mechanisms to the complex behavior of whole organisms. The project is structured to develop integrated theoretical and experimental approaches, perform multi-scale modeling and analysis, and bring all of this together in a high-performance computing environment.
Modeling Cell Differentiation (P5 of CMMS):
In our project we aim to understand the cell-differentiation process in cyanobacteria. Our collaborators will acquire electron-microscopy (EM) images at different stages of development. We then develop a segmentation pipeline for these images and compare the results to gene-expression data from next-generation sequencing (NGS) experiments, in order to find correlations between morphology and gene expression. In addition, we will develop a cellular Potts model (CPM) that takes the gene-expression data as input and should be able to reproduce the morphology observed in the EM images.
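The core of a cellular Potts model is a Metropolis update on a lattice of cell identities, with an energy combining cell-cell adhesion and a volume constraint. The sketch below shows a minimal such update step; all parameter values (adhesion energy, volume constraint, temperature) are illustrative placeholders, not values from the project:

```python
# Minimal sketch of a cellular Potts model (CPM) Metropolis update.
# All parameters are illustrative, not fitted to cyanobacteria data.
import math
import random

random.seed(0)

L = 20           # lattice side length
J = 2.0          # adhesion energy between unlike cell IDs
LAMBDA_V = 1.0   # volume-constraint strength
TARGET_V = 200   # target volume per cell (matches the initial half-split)
T = 4.0          # "temperature" controlling acceptance of unfavorable copies

# two cells (IDs 1 and 2), each occupying half the lattice
lattice = [[1 if x < L // 2 else 2 for x in range(L)] for y in range(L)]

def neighbors(x, y):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        yield (x + dx) % L, (y + dy) % L   # periodic boundaries

def volume(cell_id):
    return sum(row.count(cell_id) for row in lattice)

def delta_energy(x, y, new_id):
    old_id = lattice[y][x]
    dH = 0.0
    # adhesion term: change in the number of unlike neighbor pairs
    for nx, ny in neighbors(x, y):
        nid = lattice[ny][nx]
        dH += J * ((nid != new_id) - (nid != old_id))
    # volume term: quadratic penalty on deviation from the target volume
    for cid, dv in ((old_id, -1), (new_id, +1)):
        v = volume(cid)
        dH += LAMBDA_V * ((v + dv - TARGET_V) ** 2 - (v - TARGET_V) ** 2)
    return dH

def metropolis_step():
    # pick a random site and try to copy a random neighbor's cell ID into it
    x, y = random.randrange(L), random.randrange(L)
    nx, ny = random.choice(list(neighbors(x, y)))
    new_id = lattice[ny][nx]
    if new_id == lattice[y][x]:
        return False
    dH = delta_energy(x, y, new_id)
    if dH <= 0 or random.random() < math.exp(-dH / T):
        lattice[y][x] = new_id
        return True
    return False

accepted = sum(metropolis_step() for _ in range(1000))
print(f"accepted {accepted} of 1000 copy attempts")
```

In the actual project, the energy function would additionally be coupled to the gene-expression input, so that expression levels modulate adhesion and growth terms; the simulated morphologies can then be compared against the segmented EM images.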
Outside of the CMMS project, our group studies the migration of cancer cells and uses machine-learning algorithms to classify molecular subtypes of GC cancer.
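Such a subtype classification can be illustrated with a minimal nearest-neighbor classifier on expression-like feature vectors. The data, feature dimensions, and subtype labels below are synthetic placeholders, not measurements or results from the group's study:

```python
# Minimal sketch of subtype classification with k-nearest neighbors.
# Training data and labels are synthetic, purely for illustration.
import math
import random

random.seed(1)

def make_sample(center):
    # synthetic feature vector scattered around a subtype "centroid"
    return [c + random.gauss(0, 0.3) for c in center]

# two well-separated synthetic subtypes in a 3-feature space
train = [(make_sample([0, 0, 0]), "subtype_A") for _ in range(20)] + \
        [(make_sample([2, 2, 2]), "subtype_B") for _ in range(20)]

def knn_predict(x, k=5):
    # majority vote among the k training samples closest to x
    nearest = sorted(train, key=lambda s: math.dist(x, s[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

print(knn_predict([0.1, -0.2, 0.0]))  # → subtype_A
print(knn_predict([1.9, 2.1, 2.0]))   # → subtype_B
```

Real expression data is far higher-dimensional and noisier, so in practice feature selection and more robust models are used, but the classification principle is the same.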
High-performance computing
The LOEWE-CSC is a heterogeneous supercomputer at Goethe University Frankfurt. It was installed in 2010 at the Industriepark Höchst. It comprises 825 compute nodes, thousands of CPU cores (AMD and Intel), and GPU accelerators.
The design of the LOEWE-CSC supports a vast range of computational challenges, including lattice QCD, UrQMD hydrodynamic calculations, and the reconstruction of high-energy particle-physics events.