Abstract
At the threshold of exascale computing, the limitations of the MPI programming model become increasingly pronounced. HPC programmers must design codes that run and scale on systems with hundreds of thousands of cores. Setting up correspondingly many communication buffers and point-to-point communication links, and relying on bulk-synchronous communication phases, contradicts scalability in these dimensions. Moreover, the reliability of upcoming systems is expected to decrease.
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Alrutz, T. et al. (2013). GASPI – A Partitioned Global Address Space Programming Interface. In: Keller, R., Kramer, D., Weiss, JP. (eds) Facing the Multicore-Challenge III. Lecture Notes in Computer Science, vol 7686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-35893-7_18
DOI: https://doi.org/10.1007/978-3-642-35893-7_18
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-35892-0
Online ISBN: 978-3-642-35893-7
eBook Packages: Computer Science (R0)