Abstract
The Message Passing Interface (MPI) is the most widely used standard for writing portable, scalable parallel applications for distributed-memory architectures. Writing efficient parallel applications with MPI is nevertheless a complex task, largely because programmers must explicitly handle all the mechanics of message passing: inter-process communication, data distribution, load balancing, and synchronization. The main goal of our research is to raise the level of abstraction of explicit parallelization with MPI so that the effort of developing parallel applications is significantly reduced: the amount of code written manually shrinks, and no intrusive changes to existing sequential programs are required. In this research, generative programming tools and techniques are combined with a domain-specific language, Hi-PaL (High-Level Parallelization Language), to automate generating the code required for parallelization and inserting it into existing sequential applications. The results show that the performance of the generated applications is comparable to that of manually written versions, while requiring no explicit changes to the existing sequential code.
Arora, R., Bangalore, P. & Mernik, M. Raising the level of abstraction for developing message passing applications. J Supercomput 59, 1079–1100 (2012). https://doi.org/10.1007/s11227-010-0490-3