The Industrial Physicist

American Institute of Physics


Book Review

Parallel Scientific Computing in C++ and MPI: A Seamless Approach to Parallel Algorithms and Their Implementation

George Em Karniadakis and Robert M. Kirby II
Cambridge University Press, New York, 2003
616 pp., $150.00 hb, $55.00 pb (both include CD-ROM)
ISBN 0-521-81754-4 hb, ISBN 0-521-52080-0 pb

Reviewed by Babak Makkinejad


New and challenging problems are being encountered in the areas of data mining, bioinformatics, and computational fluid dynamics that require a very large computational capacity. The availability of commodity hardware components such as motherboards and memory chips, together with free software such as Linux, the GNU compilers, and the Message-Passing Interface (in which message-passing is used to control the flow of the computation), has put massively parallel machines such as Beowulf clusters within reach of medium-sized companies and academic departments. As parallel computing continues to merge into the mainstream of computing, it is becoming important for students and professionals to understand the application and analysis of algorithmic paradigms to both the (traditional) sequential model of computing and various parallel models.

Parallel Scientific Computing in C++ and MPI, by George Em Karniadakis and Robert M. Kirby II, is a valiant effort to introduce parallel scientific computing to the student in a unified manner. The textbook offers the student with no previous background in computing three books in one: a textbook on the analysis of algorithms, a textbook on parallel programming using MPI 1.x, and an elementary book on programming using a subset of C++ as a better “C”.

Karniadakis is a professor of applied mathematics at Brown University, working on simulations of turbulence in complex geometries. Kirby is an assistant professor of computer science at the University of Utah, specializing in large-scale scientific computing. This textbook, largely based on Karniadakis's courses at Princeton University, Brown University, and MIT over the past 15 years, is thus slanted toward computational fluid dynamics. It is strong as a traditional algorithms-based textbook for an introductory course in numerical analysis at the late-undergraduate or early-graduate level. It examines such core topics as dense and sparse matrix computations, linear systems, finite differences, and fast Fourier transforms. The text assumes a solid technical background including calculus, linear algebra, and differential equations.

The initial chapters explain how and why parallel computing began, present an overview of parallel architectures, and introduce MPI 1.x. The authors follow this by discussing the powerful divide-and-conquer paradigm and develop the basics of each topic, such as root finding and approximation with sequential and MPI-specific implementation details and much useful (but not optimal) C++ code. Chapters 3, 5, and 6 are the heart of the book, where approximation of functions, explicit and implicit discretization, and MPI are discussed in detail. The authors are very careful in establishing the foundation of each algorithm, and considerable care is taken in explaining and estimating the accuracy of each numerical technique, its stability, and its convergence with benchmarks. Each chapter also includes advice on common programming pitfalls, “gotchas,” and exercises. There are, in fact, 162 homework problems throughout the book.

The authors state the following: “Our book treats numerics, parallelism, and programming equally and simultaneously.” They do not achieve their stated purpose of treating these topics equally in their discussions of C++ and MPI 1.x. Readers looking for examples of how encapsulation, inheritance, exception handling, templates, and polymorphism can be used to control the complexity of developing, debugging, maintaining, and tuning parallel software using MPI will not find them in this book. For a more thorough discussion of MPI 1.x from a software development point of view using “C,” one might consult Parallel Programming with MPI by Peter S. Pacheco (Morgan Kaufmann Publishers, 1997). However, neither text treats MPI 2.0 features such as multithreading or C++ bindings for MPI.

In spite of falling short of its ambitious goals, this textbook is useful for those who would like to know how to write parallel programs using MPI or who wish to go beyond such cookbook texts as Numerical Recipes in C++: The Art of Scientific Computing by William H. Press et al. (Cambridge University Press, 2002).

Biography

Babak Makkinejad, a consultant with EDS, received his Ph.D. in theoretical physics from the University of Michigan in Ann Arbor. He has worked in the areas of computational physics, computer graphics, image processing, and enterprise software development.
