Efficient Learning Algorithm Using Compact Data Representation in Neural Networks

  • Conference paper
Neural Information Processing (ICONIP 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10635)

Abstract

Convolutional neural networks have dramatically improved prediction accuracy in a wide range of applications, such as vision recognition and natural language processing. However, recent neural networks often require several hundred megabytes of memory for the network parameters, which in turn consume a large amount of energy during computation. To achieve better energy efficiency, this work investigates the effect of compact data representations on memory savings for network parameters in artificial neural networks, while maintaining comparable accuracy in both the training and inference phases. We have studied the dependence of prediction accuracy on the total number of bits in a fixed-point data representation, using a proper range for the synaptic weights. We have also proposed a dictionary-based architecture that uses a limited number of floating-point entries for all the synaptic weights, with proper initialization and scaling factors to minimize the approximation error. Our experiments with a 5-layer convolutional neural network on the CIFAR-10 dataset show that 8 bits are enough for both the bit-width reduction and the dictionary-based architecture, achieving 96.0% and 96.5% relative accuracy, respectively, compared to the conventional 32-bit floating-point representation.
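The abstract describes the two schemes only at a high level, so the following is a minimal Python/NumPy sketch of what they could look like: (1) rounding weights to a signed 8-bit fixed-point grid over a chosen range, and (2) mapping each weight to the nearest entry of a small shared floating-point dictionary with a global scaling factor. The function names, the quantile-based codebook initialization, and all parameter values here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the paper does not publish code, so the details
# below (range handling, codebook initialization, scaling) are assumptions
# based on the abstract.
import numpy as np

def to_fixed_point(weights, total_bits=8, weight_range=1.0):
    """Round weights to a signed fixed-point grid covering [-weight_range, weight_range)."""
    levels = 2 ** (total_bits - 1)          # number of positive quantization steps
    step = weight_range / levels            # size of one quantization step
    q = np.clip(np.round(weights / step), -levels, levels - 1)
    return q * step                         # de-quantized values used during compute

def dictionary_approx(weights, num_entries=256):
    """Map each weight to the nearest entry of a small shared floating-point codebook.

    The quantile-based initialization and the single global scale are
    illustrative choices; the paper's initialization and scaling may differ.
    """
    scale = float(np.max(np.abs(weights))) or 1.0
    normalized = weights / scale
    # Quantiles place more codebook entries where the weight distribution is dense.
    entries = np.quantile(normalized, np.linspace(0.0, 1.0, num_entries))
    idx = np.argmin(np.abs(normalized[..., None] - entries), axis=-1)
    return entries[idx] * scale, idx, entries, scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)

    w_fx = to_fixed_point(w, total_bits=8, weight_range=0.5)
    w_dict, idx, entries, scale = dictionary_approx(w, num_entries=256)

    print("fixed-point max abs error:", float(np.max(np.abs(w - w_fx))))
    print("dictionary  max abs error:", float(np.max(np.abs(w - w_dict))))
```

With 256 entries, each weight is stored as an 8-bit index into the shared codebook plus the codebook itself, which is consistent with the 8-bit figure reported above. In a training setup, a common approach is to apply such approximations in the forward pass while accumulating updates in full precision; whether the paper follows that scheme is not stated in the abstract.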

References

  1. Horowitz, M.: 1.1 computing’s energy problem (and what we can do about it). In: Solid-State Circuits Conference Digest of Technical Papers (ISSCC), pp. 10–14 (2014)

    Google Scholar 

  2. Gupta, S., Agrawal, A., Gopalakrishnan, K., Narayanan, P.: Deep learning with limited numerical precision. In: Proceedings of the 32nd International Conference on Machine Learning, pp. 1737–1746 (2015)

    Google Scholar 

  3. Moons, B., De Brabandere, B., Van Gool, L., Verhelst, M.: Energy-efficient convnets through approximate computing. In: Applications of Computer Vision (WACV), pp. 1–8 (2016)

    Google Scholar 

  4. Courbariaux, M., Hubara, I., Soudry, D., El-Yaniv, R., Bengio, Y.: Binarized neural networks: training deep neural networks with weights and activations constrained to +1 or −1. In: Lee, D.D., Sugiyama, M., Luxburg, U.V., Guyon, I., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 29, pp. 4107–4115. MIT Press, Cambridge (2016)

    Google Scholar 

  5. Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., Bengio, Y.: Quantized neural networks: training neural networks with bit width reduction weights and activations (2016). arXiv preprint: arXiv:1609.07061

  6. Chen, W., Wilson, J., Tyree, S., Weinberger, K., Chen, Y.: Compressing neural networks with the hashing trick. In: Proceedings of the 32nd International Conference on Machine Learning, pp. 2285–2294 (2015)

    Google Scholar 

  7. Han, S., Mao, H., Dally, W.J.: Deep compression: compressing deep neural networks with pruning, trained quantization and huffman coding. In: International Conference on Learning Representations (2016)

    Google Scholar 

  8. Lin, Z., Courbariaux, M., Memisevic, R., Bengio, Y.: Neural networks with few multiplications (2015). arXiv preprint: arXiv:1510.03009

  9. Hashemi, S., Anthony, N., Tann, H., Bahar, R.I., Reda, S.: Understanding the impact of precision quantization on the accuracy and energy of neural networks. In: 2017 Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 1474–1479 (2017)

    Google Scholar 

  10. Cheng, Y., Yu, F.X., Feris, R.S., Kumar, S., Choudhary, A., Chang, S.F.: An exploration of parameter redundancy in deep networks with circulant projections. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2857–2865 (2015)

    Google Scholar 

  11. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems, vol. 25, pp. 1097–1105. MIT Press, Cambridge (2012)

    Google Scholar 

  12. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd International Conference on Machine Learning, pp. 448–456 (2015)

    Google Scholar 

Download references

Author information

Corresponding author

Correspondence to Masaya Kibune.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Kibune, M., Lee, M.G. (2017). Efficient Learning Algorithm Using Compact Data Representation in Neural Networks. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.-S. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science (LNTCS), vol. 10635. Springer, Cham. https://doi.org/10.1007/978-3-319-70096-0_33

  • DOI: https://doi.org/10.1007/978-3-319-70096-0_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70095-3

  • Online ISBN: 978-3-319-70096-0

  • eBook Packages: Computer Science, Computer Science (R0)
