
An Adaptive Scheme for Dynamic Parallelization

  • Conference paper
Languages and Compilers for Parallel Computing (LCPC 2001)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2624)

Abstract

In this paper, we present an adaptive dynamic parallelization scheme which integrates the inspector/executor scheme and the speculation scheme to enhance the capability of a parallelizing compiler and reduce the overhead of dynamic parallelization. Under our scheme, a parallelizing compiler can adaptively apply the inspector/executor scheme or the speculation scheme to a candidate loop that cannot be parallelized statically. We also introduce several techniques which enable dynamic parallelization of certain programs, including SPICE, TRACK and DYFESM in the Perfect Benchmark suite. The experimental results show that our adaptive scheme and techniques are quite effective.
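The inspector/executor idea mentioned above can be illustrated with a minimal sketch (a hypothetical example for exposition, not the authors' implementation): for a loop whose cross-iteration dependences flow through an index array, an inspector pass computes a wavefront schedule at run time, and an executor then runs each wavefront's iterations, which are mutually independent and could be dispatched to parallel threads.

```python
def inspector(idx):
    """Assign each iteration to the earliest wavefront such that no two
    iterations in the same wavefront touch the same element of a[]."""
    last_stage = {}   # element index -> last wavefront that accessed it
    schedule = []     # wavefront number assigned to each iteration
    for elem in idx:
        stage = last_stage.get(elem, -1) + 1
        schedule.append(stage)
        last_stage[elem] = stage
    return schedule

def executor(a, idx, schedule):
    """Run the loop wavefront by wavefront; iterations within one
    wavefront are independent (shown sequentially here for clarity)."""
    for stage in range(max(schedule) + 1):
        for i, s in enumerate(schedule):
            if s == stage:
                a[idx[i]] += 1   # the loop body: a[idx[i]] = a[idx[i]] + 1
    return a

a = [0] * 4
idx = [0, 1, 0, 2]              # iterations 0 and 2 conflict on a[0]
sched = inspector(idx)          # -> [0, 0, 1, 0]
print(executor(a, idx, sched))  # -> [2, 1, 1, 0]
```

The inspector's cost is the overhead that an adaptive scheme weighs against speculation: if the inspector is cheap relative to the loop body, inspector/executor pays off; otherwise speculative execution with run-time dependence checking may be preferable.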

This work is supported in part by the National Science Foundation through grants ACI/ITR-0082834 and CCR-9975309.


Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ding, Y., Li, Z. (2003). An Adaptive Scheme for Dynamic Parallelization. In: Dietz, H.G. (eds) Languages and Compilers for Parallel Computing. LCPC 2001. Lecture Notes in Computer Science, vol 2624. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-35767-X_18

Download citation

  • DOI: https://doi.org/10.1007/3-540-35767-X_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-04029-3

  • Online ISBN: 978-3-540-35767-4

  • eBook Packages: Springer Book Archive
