Corpus ID: 235367770

# SPANet: Generalized Permutationless Set Assignment for Particle Physics using Symmetry Preserving Attention

@article{Shmakov2021SPANetGP,
title={SPANet: Generalized Permutationless Set Assignment for Particle Physics using Symmetry Preserving Attention},
author={Alexander Shmakov and M. J. Fenton and Ta-Wei Ho and Shih-Chieh Hsu and Daniel Whiteson and Pierre Baldi},
journal={ArXiv},
year={2021},
volume={abs/2106.03898}
}
The creation of unstable heavy particles at the Large Hadron Collider is the most direct way to address some of the deepest open questions in physics. Collisions typically produce variable-size sets of observed particles which have inherent ambiguities complicating the assignment of observed particles to the decay products of the heavy particles. Current strategies for tackling these challenges in the physics community ignore the physical symmetries of the decay products and consider all…

#### References

Showing 1-10 of 53 references
Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks
• Computer Science
• ICML
• 2019
This work presents an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set, and reduces the computation time of self-attention from quadratic to linear in the number of elements in the set.
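The quadratic-to-linear reduction comes from attending through a small set of learned inducing points rather than between all element pairs. A minimal NumPy sketch of that idea, assuming single-head attention and omitting the feed-forward, layer-norm, and residual sublayers of the actual Set Transformer (function names here are illustrative, not from the paper's code):

```python
import numpy as np

def attend(Q, K, V):
    """Single-head scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def induced_set_attention(X, I):
    """Attend through m inducing points I instead of all n^2 element pairs.

    Cost is O(n*m) per step rather than O(n^2); with m fixed, this is
    linear in the set size n.
    """
    H = attend(I, X, X)    # (m, d): inducing points summarize the set
    return attend(X, H, H)  # (n, d): each element reads the summary back

# Usage: a set of 100 elements, 4 learned inducing points (random here).
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 8))
I = rng.standard_normal((4, 8))
Y = induced_set_attention(X, I)
```

The output `Y` has the same shape as `X`, so blocks like this can be stacked, and the whole map is permutation-equivariant in the set elements.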
DeepPermNet: Visual Permutation Learning
• Computer Science
• 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
• 2017
The utility of DeepPermNet, an end-to-end CNN model for visual permutation learning, is demonstrated on two challenging computer vision problems, namely, (i) relative attributes learning and (ii) self-supervised representation learning.
Jet Substructure Classification in High-Energy Physics with Deep Neural Networks
• Physics
• 2016
At the extreme energies of the Large Hadron Collider, massive particles can be produced at such high velocities that their hadronic decays are collimated and the resulting jets overlap. Deducing…
From the bottom to the top—reconstruction of t t̄ events with deep learning
• Physics
• Journal of Instrumentation
• 2019
The reconstruction of top-quark pair-production ($t\bar{t}$) events is a prerequisite for many top-quark measurements. We use a deep neural network, trained with Monte Carlo simulated events, to…
Safety of Quark/Gluon Jet Classification
• Physics
• 2021
The classification of jets as quark- versus gluon-initiated is an important yet challenging task in the analysis of data from high-energy particle collisions and in the search for physics beyond the…
Attention is All you Need
• Computer Science
• NeurIPS
• 2017
A new, simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, is proposed; it generalizes well to other tasks, as shown by applying it successfully to English constituency parsing with both large and limited training data.
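The core operation of the Transformer is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch of just that operation (not the full multi-head architecture from the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarity logits
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V                            # convex combination of values

# Usage: 4 queries attend over 6 key/value pairs of dimension 16.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 16))
K = rng.standard_normal((6, 16))
V = rng.standard_normal((6, 16))
out = scaled_dot_product_attention(Q, K, V)
```

Each output row is a convex combination of the rows of `V`, weighted by query-key similarity; the 1/sqrt(d_k) scaling keeps the logits from saturating the softmax as the dimension grows.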
Gauge Equivariant Convolutional Networks and the Icosahedral CNN
• Computer Science, Mathematics
• ICML
• 2019
Gauge equivariant convolution using a single conv2d call is demonstrated, making it a highly scalable and practical alternative to Spherical CNNs and demonstrating substantial improvements over previous methods on the task of segmenting omnidirectional images and global climate patterns.
Jet substructure at the Large Hadron Collider: A review of recent advances in theory and machine learning
• Physics
• 2017
Jet substructure has emerged to play a central role at the Large Hadron Collider (LHC), where it has provided numerous innovative new ways to search for new physics and to probe the Standard Model in…
opt_einsum - A Python package for optimizing contraction order for einsum-like expressions
• Computer Science
• J. Open Source Softw.
• 2018
Expressions with many tensors are particularly prevalent in many-body theories such as quantum chemistry, particle physics, and nuclear physics in addition to other fields such as machine learning.
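The point of contraction-order optimization is that the same multi-tensor product can be evaluated in different orders with very different costs. A small sketch using NumPy's built-in `einsum`/`einsum_path` (opt_einsum itself exposes the same idea via `opt_einsum.contract`, which this example does not require):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 32))
B = rng.standard_normal((32, 64))
C = rng.standard_normal((64, 8))

# The chain A @ B @ C as a single einsum expression, evaluated
# naively left-to-right vs. with an optimized contraction path.
naive = np.einsum("ij,jk,kl->il", A, B, C, optimize=False)
opt = np.einsum("ij,jk,kl->il", A, B, C, optimize=True)

# The chosen contraction path can be inspected without evaluating:
path, info = np.einsum_path("ij,jk,kl->il", A, B, C, optimize="optimal")
```

Both evaluations give the same (8, 8) result; the optimizer's job is only to pick which pairwise contractions to do first, which matters far more for larger networks of tensors than for this three-matrix chain.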
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
• Computer Science
• NAACL
• 2019
A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.