
Strategies for maximizing utilization on multi-CPU and multi-GPU heterogeneous architectures

The Journal of Supercomputing

Abstract

This paper explores the possibility of efficiently executing a single application on multicores together with multiple GPU accelerators under a parallel task programming paradigm. In particular, we address the challenge of extending a parallel_for template so that it can be exploited on heterogeneous architectures. Because the computing resources are asymmetric, we propose a dynamic scheduling strategy coupled with an adaptive partitioning scheme that resizes chunks to prevent underutilization and load imbalance of CPUs and GPUs. We also address the underutilization of the CPU core on which a host thread operates, and propose two different approaches to solve it: (1) a collaborative host thread strategy, in which the host thread, instead of busy-waiting for the GPU to complete, carries out useful chunk processing; and (2) a host thread blocking strategy combined with oversubscription, which delegates to the OS the duty of scheduling threads onto the available CPU cores so that all cores do useful work. Using two benchmarks, we evaluate the overhead introduced by our scheduling and partitioning algorithms and find that it is negligible. We also evaluate the efficiency of the proposed strategies and find that oversubscription controlled by the OS can be beneficial in certain scenarios.
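
The abstract describes a dynamic scheduler that hands out chunks of a parallel_for iteration space to CPU cores and GPUs, plus a collaborative host thread strategy in which the thread driving a GPU also processes chunks instead of busy-waiting. The C++ sketch below is only an illustration of these ideas under stated assumptions, not the authors' implementation: all names (process_on_cpu, process_on_gpu, cpu_worker, gpu_host_worker) and the fixed chunk sizes are hypothetical, the GPU offload is simulated on the CPU, and the adaptive chunk resizing and the blocking-plus-oversubscription strategy described in the paper are omitted.

    // Minimal sketch of dynamic chunk scheduling over a shared iteration space,
    // with a dedicated host thread for the (simulated) GPU. Illustrative only.
    #include <algorithm>
    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    static std::atomic<long> next_index{0};  // next unclaimed iteration

    // Placeholder for the parallel_for body executed on a CPU chunk.
    static void process_on_cpu(long begin, long end) {
        for (long i = begin; i < end; ++i) { /* loop body would go here */ }
    }

    // Placeholder for offloading a chunk to a GPU; a real runtime would
    // launch a device kernel here. Simulated on the CPU in this sketch.
    static void process_on_gpu(long begin, long end) {
        process_on_cpu(begin, end);
    }

    // CPU worker: repeatedly claims a small chunk until the range is exhausted.
    static void cpu_worker(long n, long cpu_chunk) {
        for (;;) {
            long begin = next_index.fetch_add(cpu_chunk);
            if (begin >= n) break;
            process_on_cpu(begin, std::min(begin + cpu_chunk, n));
        }
    }

    // Host thread driving one GPU: claims large chunks for the device and,
    // in the spirit of the collaborative strategy, also processes a CPU-sized
    // chunk itself rather than sitting idle between GPU chunks.
    static void gpu_host_worker(long n, long gpu_chunk, long cpu_chunk) {
        for (;;) {
            long begin = next_index.fetch_add(gpu_chunk);
            if (begin >= n) break;
            process_on_gpu(begin, std::min(begin + gpu_chunk, n));
            long b2 = next_index.fetch_add(cpu_chunk);
            if (b2 < n) process_on_cpu(b2, std::min(b2 + cpu_chunk, n));
        }
    }

    int main() {
        const long n = 1L << 24;         // iteration space
        const long cpu_chunk = 1L << 12; // small chunks for CPU cores
        const long gpu_chunk = 1L << 18; // larger chunks for the GPU

        std::vector<std::thread> team;
        team.emplace_back(gpu_host_worker, n, gpu_chunk, cpu_chunk);
        unsigned cores = std::max(2u, std::thread::hardware_concurrency());
        for (unsigned t = 1; t < cores; ++t)
            team.emplace_back(cpu_worker, n, cpu_chunk);
        for (auto &th : team) th.join();

        std::printf("processed %ld iterations\n", n);
        return 0;
    }

In a real heterogeneous runtime the GPU chunk would be launched asynchronously (for example, a CUDA kernel on a stream), and the host thread would either process CPU chunks while the kernel runs (the collaborative strategy) or block on the GPU and let the OS schedule an oversubscribed worker thread onto its core (the blocking strategy).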



Acknowledgments

This material is based on work supported by the Spanish projects TIN2010-16144 from the Ministerio de Ciencia e Innovación, P08-TIC-3500 and P11-TIC-8144 from the Junta de Andalucía, and the CAPAP-H4 network (TIN2011-15734-E).

Author information

Correspondence to Rafael Asenjo.


Cite this article

Navarro, A., Vilches, A., Corbera, F. et al. Strategies for maximizing utilization on multi-CPU and multi-GPU heterogeneous architectures. J Supercomput 70, 756–771 (2014). https://doi.org/10.1007/s11227-014-1200-3
