Link to original content: https://doi.org/10.5220/0010245002510258
SciTePress - Publication Details

Title: Reduced Precision Strategies for Deep Learning: A High Energy Physics Generative Adversarial Network Use Case

Authors: Florian Rehm 1,2; Sofia Vallecorsa 1; Vikram Saletore 3; Hans Pabst 3; Adel Chaibi 3; Valeriu Codreanu 4; Kerstin Borras 5,2 and Dirk Krücker 5

Affiliations: 1 CERN, Switzerland; 2 RWTH Aachen University, Germany; 3 Intel, U.S.A.; 4 SURFsara, The Netherlands; 5 DESY, Germany

Keyword(s): Reduced Precision, Quantization, Convolutions, Generative Adversarial Network Validation, High Energy Physics, Calorimeter Simulations.

Abstract: Deep learning is finding its way into high energy physics by replacing traditional Monte Carlo simulations. However, deep learning still requires an excessive amount of computational resources. A promising approach to making deep learning more efficient is to quantize the parameters of the neural networks to reduced precision. Reduced precision computing is extensively used in modern deep learning and results in lower inference execution times, a smaller memory footprint and less memory bandwidth. In this paper we analyse the effects of low precision inference on a complex deep generative adversarial network model. The use case we address is calorimeter detector simulation of subatomic particle interactions in accelerator-based high energy physics. We employ the novel Intel low precision optimization tool (iLoT) for quantization and compare the results to the quantized model from TensorFlow Lite. In the performance benchmark we obtain a speed-up of 1.73x on Intel hardware for the quantized iLoT model compared to the initial, unquantized model. With different physics-inspired, self-developed metrics, we validate that the quantized iLoT model shows a lower loss of physical accuracy than the TensorFlow Lite model.
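
As an illustration of the TensorFlow Lite path that the abstract compares against iLoT, the sketch below applies generic post-training INT8 quantization to a saved Keras generator. This is not the authors' pipeline: the model path, the latent-vector size of 256 and the single-input signature are assumptions made for the example.

import tensorflow as tf

# Load a trained generator (hypothetical path; the paper's 3D GAN is not distributed here).
model = tf.keras.models.load_model("3dgan_generator")

# Configure the converter for post-training quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_latents():
    # Calibration inputs for full-integer quantization: random latent vectors
    # standing in for the real input distribution (latent size 256 is assumed).
    for _ in range(100):
        yield [tf.random.normal([1, 256])]

converter.representative_dataset = representative_latents
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

# Convert and write out the INT8 model for reduced-precision inference.
tflite_model = converter.convert()
with open("3dgan_generator_int8.tflite", "wb") as f:
    f.write(tflite_model)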

CC BY-NC-ND 4.0

Paper citation in several formats:
Rehm, F.; Vallecorsa, S.; Saletore, V.; Pabst, H.; Chaibi, A.; Codreanu, V.; Borras, K. and Krücker, D. (2021). Reduced Precision Strategies for Deep Learning: A High Energy Physics Generative Adversarial Network Use Case. In Proceedings of the 10th International Conference on Pattern Recognition Applications and Methods - ICPRAM; ISBN 978-989-758-486-2; ISSN 2184-4313, SciTePress, pages 251-258. DOI: 10.5220/0010245002510258

@conference{icpram21,
author={Florian Rehm and Sofia Vallecorsa and Vikram Saletore and Hans Pabst and Adel Chaibi and Valeriu Codreanu and Kerstin Borras and Dirk Krücker},
title={Reduced Precision Strategies for Deep Learning: A High Energy Physics Generative Adversarial Network Use Case},
booktitle={Proceedings of the 10th International Conference on Pattern Recognition Applications and Methods - ICPRAM},
year={2021},
pages={251-258},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010245002510258},
isbn={978-989-758-486-2},
issn={2184-4313},
}

TY - CONF
JO - Proceedings of the 10th International Conference on Pattern Recognition Applications and Methods - ICPRAM
TI - Reduced Precision Strategies for Deep Learning: A High Energy Physics Generative Adversarial Network Use Case
SN - 978-989-758-486-2
IS - 2184-4313
AU - Rehm, F.
AU - Vallecorsa, S.
AU - Saletore, V.
AU - Pabst, H.
AU - Chaibi, A.
AU - Codreanu, V.
AU - Borras, K.
AU - Krücker, D.
PY - 2021
SP - 251
EP - 258
DO - 10.5220/0010245002510258
PB - SciTePress
ER -