An Innovative Course in Parallel Computing

Yi Pan

Abstract

This paper describes an innovative course in parallel computing. Traditional parallel computing courses use either low-level message-passing interfaces or high-level language directives, but not both, due to limited time. In our course, we use both high-level and low-level language constructs. In this paper, we briefly introduce several language interface standards and discuss why we have chosen to use OpenMP (a high-level language interface) and MPI (a low-level message-passing interface) in our parallel computing class. Some of the drawbacks of using OpenMP in teaching are identified, and we show how these drawbacks are addressed in the course. Several programming projects and a research/survey project given in our class are also described in detail. Through careful design of the course, we show that students can learn many basic concepts through low-level parallel language interfaces and parallelize real (long) scientific codes using high-level parallel language directives within a short period. This would be impossible to accomplish if only one language had been taught in the course.

Index terms: MPI, OpenMP, parallel computing, parallel language, teaching.
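To illustrate the contrast between the two interface levels discussed in the abstract, the following minimal sketch (not taken from the paper; the array-sum task and all identifiers are illustrative assumptions) parallelizes a simple summation: a single OpenMP directive distributes the local loop across threads, while explicit MPI calls handle the lower-level message passing needed to combine results across processes.

```c
/* Minimal hybrid sketch: OpenMP directive (high level) plus MPI calls
   (low level) for a simple global sum.  Compile with, e.g.,
   mpicc -fopenmp sum.c -o sum  and run with  mpirun -np 4 ./sum       */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

#define N 1000000L

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each MPI process owns a contiguous block of the index range. */
    long lo = (long)rank * N / size;
    long hi = (long)(rank + 1) * N / size;

    double local = 0.0;

    /* High-level style: one OpenMP directive parallelizes the loop
       across the threads of this process. */
    #pragma omp parallel for reduction(+:local)
    for (long i = lo; i < hi; i++)
        local += (double)i;

    /* Low-level style: explicit message passing combines the
       per-process partial sums on rank 0. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %.0f\n", total);

    MPI_Finalize();
    return 0;
}
```

The directive-based version requires only one added pragma over the serial loop, whereas the MPI portion makes data decomposition and communication explicit, which is the trade-off the course exploits by teaching both.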
