Distance-Weighted Graph Neural Networks on FPGAs for Real-Time Particle Reconstruction in High Energy Physics

Bibliographic Details
Main Authors: Iiyama, Yutaro, Cerminara, Gianluca, Gupta, Abhijay, Kieseler, Jan, Loncar, Vladimir, Pierini, Maurizio, Qasim, Shah Rukh, Rieger, Marcel, Summers, Sioni, Van Onsem, Gerrit, Wozniak, Kinga, Ngadiuba, Jennifer, Di Guglielmo, Giuseppe, Duarte, Javier, Harris, Philip, Rankin, Dylan, Jindariani, Sergo, Liu, Mia, Pedro, Kevin, Tran, Nhan, Kreinar, Edward, Wu, Zhenbin
Format: Journal Article
Language: English
Published: 04-02-2021
Subjects:
Description
Summary: Frontiers in Big Data 3 (2021) 44. Graph neural networks have been shown to achieve excellent performance for several crucial tasks in particle physics, such as charged particle tracking, jet tagging, and clustering. An important domain for the application of these networks is the FPGA-based first layer of real-time data filtering at the CERN Large Hadron Collider, which has strict latency and resource constraints. We discuss how to design distance-weighted graph networks that can be executed with a latency of less than 1 $\mu\mathrm{s}$ on an FPGA. To do so, we consider a representative task associated with particle reconstruction and identification in a next-generation calorimeter operating at a particle collider. We use a graph network architecture developed for such purposes, and apply additional simplifications to match the computing constraints of Level-1 trigger systems, including weight quantization. Using the $\mathtt{hls4ml}$ library, we convert the compressed models into firmware to be implemented on an FPGA. Performance of the synthesized models is presented both in terms of inference accuracy and resource usage.
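The workflow summarized above (compress a network, convert it with $\mathtt{hls4ml}$, and synthesize firmware) can be illustrated with a minimal sketch. The snippet below is not the paper's actual pipeline: it uses a small stand-in Keras dense model rather than the distance-weighted graph architecture, and the output directory name, FPGA part number, and fixed-point precision are illustrative assumptions.

```python
import tensorflow as tf
import hls4ml

# Stand-in model: a tiny dense network used purely to keep the sketch
# self-contained. The paper's actual architecture is a distance-weighted
# graph network, which needs dedicated hls4ml support.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(8, activation="softmax"),
])

# Derive an hls4ml configuration from the Keras model and pick a reduced
# fixed-point precision to mimic weight quantization (values are examples,
# not the paper's settings).
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
config["Model"]["Precision"] = "ap_fixed<16,6>"  # illustrative precision
config["Model"]["ReuseFactor"] = 1               # fully parallel, lowest latency

# Convert the model into an HLS project targeting an example Xilinx part.
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="gnn_hls_project",      # hypothetical output directory
    part="xcvu9p-flga2104-2L-e",       # example FPGA part, not from the paper
)

# Compile a bit-accurate emulation of the firmware, then run HLS synthesis
# to obtain latency and resource estimates (requires the vendor toolchain).
hls_model.compile()
hls_model.build(csim=False, synth=True)
```

In this kind of flow, the reuse factor trades resource usage for latency, and the fixed-point precision controls the quantization of weights and activations; the paper reports the accuracy and resource costs of such choices for its graph network models.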
Bibliography: FERMILAB-PUB-20-405-E-SCD
DOI: 10.48550/arxiv.2008.03601