
Characterizing Containerized HPC Applications Performance at Petascale on CPU and GPU Architectures

  • Conference paper
  • First Online:
High Performance Computing (ISC High Performance 2021)

Abstract

Containerization technologies provide a mechanism to encapsulate applications and many of their dependencies, facilitating software portability and reproducibility on HPC systems. However, accessing many of the architectural features that enable HPC system performance requires compatibility between certain components of the container and the host, resulting in a trade-off between portability and performance. In this work, we discuss our experiences running three state-of-the-art containerization technologies on five leading petascale systems. We present how we build the containers to ensure performance and security, and we characterize their performance at scale. We ran microbenchmarks at a scale of 6,144 nodes with 0.35 million MPI processes to baseline the performance of the container technologies. We establish the near-native performance and minimal memory overheads of the containerized environments using MILC, a lattice quantum chromodynamics code, at 139,968 processes and VPIC, a 3D electromagnetic relativistic Vector Particle-In-Cell code for modeling kinetic plasmas, at 32,768 processes. We demonstrate an on-par performance trend at large scale on Intel, AMD, and three NVIDIA architectures for both HPC applications.
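
To make the baselining idea concrete, the sketch below shows a minimal MPI ping-pong latency microbenchmark of the kind such baselines rely on; the paper's own measurements use established suites such as the Intel MPI Benchmarks, not this code, and the message size, iteration count, and program name here are illustrative assumptions.

/* Illustrative sketch only: a minimal MPI ping-pong microbenchmark.
   Ranks 0 and 1 exchange a small message repeatedly and report the
   average one-way latency, the quantity typically compared between
   bare-metal and containerized runs. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int iters = 10000;
    char buf[8] = {0};  /* small 8-byte payload keeps the test latency-bound */

    MPI_Barrier(MPI_COMM_WORLD);
    if (size >= 2 && rank < 2) {
        int peer = 1 - rank;
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, sizeof buf, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, sizeof buf, MPI_CHAR, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(buf, sizeof buf, MPI_CHAR, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, sizeof buf, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("average one-way latency: %.3f us\n", (t1 - t0) / (2.0 * iters) * 1e6);
    }

    MPI_Finalize();
    return 0;
}

In the hybrid launch model commonly used for containerized MPI jobs, the host MPI launcher typically drives the container runtime, for example along the lines of mpirun -np 2 singularity exec image.sif ./pingpong (image.sif and pingpong being hypothetical names), which is precisely where the container/host compatibility trade-off described above comes into play.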

Acknowledgment

This work is supported by the UT Austin-Portugal Program, a collaboration between the Portuguese Foundation for Science and Technology and the University of Texas at Austin, award UTA18-001217. The authors would also like to thank Melyssa Fratkin from TACC for providing valuable feedback, and Preston Smith and Xiao Zhu from Purdue for providing an allocation and support for testing on Purdue's Bell cluster.

Author information

Corresponding author

Correspondence to Amit Ruhela.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Ruhela, A., et al. (2021). Characterizing Containerized HPC Applications Performance at Petascale on CPU and GPU Architectures. In: Chamberlain, B.L., Varbanescu, A.L., Ltaief, H., Luszczek, P. (eds.) High Performance Computing. ISC High Performance 2021. Lecture Notes in Computer Science, vol. 12728. Springer, Cham. https://doi.org/10.1007/978-3-030-78713-4_22

  • DOI: https://doi.org/10.1007/978-3-030-78713-4_22

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-78712-7

  • Online ISBN: 978-3-030-78713-4

  • eBook Packages: Computer Science, Computer Science (R0)
