Authors:
Florian Rehm (1,2); Sofia Vallecorsa (1); Vikram Saletore (3); Hans Pabst (3); Adel Chaibi (3); Valeriu Codreanu (4); Kerstin Borras (5,2) and Dirk Krücker (5)
Affiliations:
(1) CERN, Switzerland; (2) RWTH Aachen University, Germany; (3) Intel, U.S.A.; (4) SURFsara, The Netherlands; (5) DESY, Germany
Keyword(s):
Reduced Precision, Quantization, Convolutions, Generative Adversarial Network Validation, High Energy Physics, Calorimeter Simulations.
Abstract:
Deep learning is finding its way into high energy physics by replacing traditional Monte Carlo simulations. However, deep learning still requires an excessive amount of computational resources. A promising approach to make deep learning more efficient is to quantize the parameters of the neural networks to reduced precision. Reduced precision computing is extensively used in modern deep learning and results in lower inference execution time, a smaller memory footprint and lower memory bandwidth requirements. In this paper we analyse the effects of low-precision inference on a complex deep generative adversarial network model. The use case we address is calorimeter detector simulation of subatomic particle interactions in accelerator-based high energy physics. We employ the novel Intel low precision optimization tool (iLoT) for quantization and compare the results to the quantized model from TensorFlow Lite. In the performance benchmark we obtain a speed-up of 1.73x on Intel hardware for the quantized iLoT model compared to the initial, non-quantized model. With different physics-inspired, self-developed metrics, we validate that the quantized iLoT model shows a lower loss of physical accuracy than the TensorFlow Lite model.
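The quantization discussed above generally follows the affine (scale and zero-point) int8 scheme used by post-training quantization toolkits. The sketch below illustrates that scheme in plain NumPy; the helper names are illustrative and do not come from iLoT or TensorFlow Lite, and a real toolkit additionally calibrates activation ranges and fuses quantized operators.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine post-training quantization of a float32 tensor to int8.

    The float range [w_min, w_max] is mapped onto the 255 representable
    int8 steps; the zero point keeps w_min aligned with -128.
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover a float32 approximation of the original values."""
    return (q.astype(np.float32) - zero_point) * scale

# Round-trip a small random weight tensor and inspect the error.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(4, 4)).astype(np.float32)
q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)
max_err = float(np.abs(w - w_hat).max())
print(q.dtype, max_err)
```

The round-trip error is bounded by roughly half a quantization step (`scale / 2`, plus zero-point rounding), which is the kind of per-parameter perturbation whose aggregate effect on the generated calorimeter showers the physics-inspired validation metrics are designed to measure.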