2nd MLHPC@SC 2016: Salt Lake City, UT, USA
- 2nd Workshop on Machine Learning in HPC Environments, MLHPC@SC, Salt Lake City, UT, USA, November 14, 2016. IEEE Computer Society 2016, ISBN 978-1-5090-3882-4
- Nikoli Dryden, Tim Moon, Sam Ade Jacobs, Brian Van Essen: Communication Quantization for Data-Parallel Training of Deep Neural Networks. 1-8
- Yaohung M. Tsai, Piotr Luszczek, Jakub Kurzak, Jack J. Dongarra: Performance-Portable Autotuning of OpenCL Kernels for Convolutional Layers of Deep Neural Networks. 9-18
- Janis Keuper, Franz-Josef Pfreundt: Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability. 19-26
- Miguel Camelo, Jeroen Famaey, Steven Latré: A Scalable Parallel Q-Learning Algorithm for Resource Constrained Decentralized Computing Environments. 27-35
- Catherine D. Schuman, Adam Disney, Susheela P. Singh, Grant Bruer, J. Parker Mitchell, Aleksander Klibisz, James S. Plank: Parallel Evolutionary Optimization for Neuromorphic Network Training. 36-46
- Thomas E. Potok, Catherine D. Schuman, Steven R. Young, Robert M. Patton, Federico M. Spedalieri, Jeremy Liu, Ke-Thia Yao, Garrett S. Rose, Gangotree Chakma: A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers. 47-55
- Onkar Bhardwaj, Guojing Cong: Practical Efficiency of Asynchronous Stochastic Gradient Descent. 56-62