Design Methodology for Optimizing Optical Interconnection Networks in High Performance Systems

  • Conference paper
  • Published in: High Performance Computing (ISC High Performance 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9137)

Conference series: ISC High Performance

Abstract

Modern high performance computers connect hundreds of thousands of endpoints and employ thousands of switches. This allows for a great deal of freedom in the design of the network topology. At the same time, due to the sheer numbers and complexity involved, it becomes more challenging to easily distinguish between promising and improper designs. With ever increasing line rates and advances in optical interconnects, there is a need for renewed design methodologies that comprehensively capture the requirements and expeditiously expose the trade-offs in this complex design space. We introduce a systematic approach, based on Generalized Moore Graphs, that allows one to quickly gauge the ideal level of connectivity required for a given number of endpoints and traffic hypothesis, and to gain insight into the role of the switch radix in the topology cost. Based on this approach, we present a methodology for identifying Pareto-optimal topologies. We apply our method to a practical case with 25,000 nodes and present the results.
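
To make the notion of Pareto-optimality concrete, the short sketch below filters non-dominated topology candidates when each candidate is scored by a cost metric and a performance metric (lower is better for both). This is an illustrative sketch only, not the paper's implementation; the candidate names, metric choices, and values are hypothetical.

    # Illustrative sketch (not the paper's implementation): keep only the
    # topology candidates that are Pareto-optimal with respect to a cost
    # metric and a performance metric. Names and values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str        # topology label
        cost: float      # e.g., normalized switch + cable cost (lower is better)
        avg_hops: float  # e.g., mean topological distance (lower is better)

    def pareto_front(candidates):
        """Keep candidates not dominated on both cost and average hop count."""
        front = []
        for c in candidates:
            dominated = any(
                o.cost <= c.cost and o.avg_hops <= c.avg_hops
                and (o.cost < c.cost or o.avg_hops < c.avg_hops)
                for o in candidates
            )
            if not dominated:
                front.append(c)
        return front

    if __name__ == "__main__":
        cands = [
            Candidate("full-mesh", cost=10.0, avg_hops=1.0),
            Candidate("torus",     cost=2.0,  avg_hops=6.0),
            Candidate("dragonfly", cost=4.0,  avg_hops=2.5),
            Candidate("random",    cost=4.5,  avg_hops=2.6),  # dominated
        ]
        for c in pareto_front(cands):
            print(c.name, c.cost, c.avg_hops)

A candidate survives only if no other candidate is at least as good on both metrics and strictly better on at least one.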


Notes

  1. Each switch is connected on average to R others with n links, so nSR ports are occupied in total. As each link connects two ports, the number of links is nSR/2. Since each link is assumed to be bidirectional, it represents two units of capacity, so the total capacity is nSR (see the sketch after these notes).

  2. Topological distance refers to the number of hops traversed within the topology itself and excludes access and egress hops. A topological distance of 0 corresponds to messages being forwarded directly to their final destination by the first switch they reach.

  3. In the Flattened Butterfly topology, all vertices sharing a dimension in the lattice are interconnected (Full-Mesh). In a torus, these vertices are connected along a ring. In our 2-dimensional construction, vertices sharing a dimension are interconnected following the structure of the largest known graph for the given diameter and maximum degree.
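
As a quick numerical check of the accounting in Note 1, the sketch below computes the number of occupied ports, the number of links, and the total capacity for S switches with average inter-switch connectivity R and n links per connected switch pair. The concrete values of S, R, and n are arbitrary illustrations, not figures from the paper.

    # Numerical check of the port/link/capacity accounting in Note 1.
    # S switches, each connected on average to R other switches, with n
    # parallel links per connection. Example values are arbitrary.

    def capacity_accounting(S, R, n):
        ports = n * S * R     # each switch occupies n*R ports for inter-switch links
        links = ports // 2    # every link consumes one port at each of its two ends
        capacity = 2 * links  # a bidirectional link counts as two units of capacity
        return ports, links, capacity

    if __name__ == "__main__":
        S, R, n = 1000, 16, 2
        print(capacity_accounting(S, R, n))  # ports == capacity == n*S*R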


Acknowledgement

This work has been realized in the context of the Department of Energy (DoE) ASCR project “Data Movement Dominates”. It has been partly supported by the U.S. Department of Energy (DoE) National Nuclear Security Administration (NNSA) Advanced Simulation and Computing (ASC) program through contract PO1426332 with Sandia National Laboratories. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.

Author information

Correspondence to Sébastien Rumley.

Appendix

A Generalized Moore Graph (GMG) can be described as follows. Consider a vertex V in a graph of degree R (i.e., a graph in which no vertex has more than R incident links). V cannot have more than R direct neighbors. It also cannot have more than R(R-1) neighbors at distance 2 (each of its R neighbors has at most R-1 further neighbors besides V), and, more generally, cannot have more than R(R-1)^(D-1) neighbors at distance D. A GMG is a graph that maximally exploits the expansion possibilities offered by the degree R: in a GMG, each vertex has exactly R direct neighbors, exactly R(R-1)^(i-1) neighbors at distance i (i = 2..D-1), and all remaining vertices are at distance D. Figure 8 illustrates the GMG concept. Because the inner layers are maximally filled, no vertex can be brought closer without interchanging it with another vertex. This means that no distance between two vertices can be reduced, and therefore that the average distance in the graph is minimized.
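
To make the layered structure concrete, the sketch below computes the maximal layer sizes R, R(R-1), R(R-1)^2, ... for a given degree R and, from them, the smallest diameter and minimal average topological distance achievable by any topology of N vertices with maximum degree R, which is the bound a Generalized Moore Graph attains by construction. This is an illustrative calculation under the definitions above, not code from the paper.

    # Illustrative calculation: lower bounds on diameter and average distance
    # implied by the Generalized Moore Graph structure for N vertices of degree R.

    def gmg_bounds(N, R):
        remaining = N - 1          # vertices to place around a reference vertex
        layer_size = R             # at most R vertices at distance 1
        distance_sum = 0
        d = 1
        while remaining > 0:
            placed = min(layer_size, remaining)
            distance_sum += d * placed
            remaining -= placed
            layer_size *= (R - 1)  # each further layer is at most (R-1) times larger
            d += 1
        diameter = d - 1
        avg_distance = distance_sum / (N - 1)
        return diameter, avg_distance

    if __name__ == "__main__":
        # Example: 9 vertices of degree 4, as in the 3x3 torus of Fig. 8(b)
        print(gmg_bounds(9, 4))    # -> (2, 1.5): 4 vertices at distance 1, 4 at distance 2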

Fig. 8. (a) Maximal expansion possibilities for connectivity/degree R = 3 and three layers; Generalized Moore Graphs follow this structure, except that the last layer does not have to be completely filled. (b) Example of a Generalized Moore Graph (a 3x3 torus). (c) The 3x3 torus reorganized to show the layers. (d) A representation of the unfilled slots in the last layer.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Rumley, S., Glick, M., Hammond, S.D., Rodrigues, A., Bergman, K. (2015). Design Methodology for Optimizing Optical Interconnection Networks in High Performance Systems. In: Kunkel, J., Ludwig, T. (eds) High Performance Computing. ISC High Performance 2015. Lecture Notes in Computer Science, vol. 9137. Springer, Cham. https://doi.org/10.1007/978-3-319-20119-1_32

  • DOI: https://doi.org/10.1007/978-3-319-20119-1_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-20118-4

  • Online ISBN: 978-3-319-20119-1
