Programming Massively Parallel Processors: A Hands-on Approach (《大规模并行处理器程序设计》.pdf)

In Praise of Programming Massively Parallel Processors: A Hands-on Approach

Parallel programming is about performance, for otherwise you'd write a sequential program. For those interested in learning or teaching the topic, a problem is where to find truly parallel hardware that can be dedicated to the task, for it is difficult to see interesting speedups if it's shared or only modestly parallel. One answer is graphical processing units (GPUs), which can have hundreds of cores and are found in millions of desktop and laptop computers. For those interested in the GPU path to parallel enlightenment, this new book from David Kirk and Wen-mei Hwu is a godsend, as it introduces CUDA, a C-like data parallel language, and Tesla, the architecture of the current generation of NVIDIA GPUs. In addition to explaining the language and the architecture, they define the nature of data parallel problems that run well on heterogeneous CPU-GPU hardware. More concretely, two detailed case studies demonstrate speedups over CPU-only C programs of 10X to 15X for naïve CUDA code and 45X to 105X for expertly tuned versions. They conclude with a glimpse of the future by describing the next generation of data parallel languages and architectures: OpenCL and the NVIDIA Fermi GPU. This book is a valuable addition to the recently reinvigorated parallel computing literature.

David Patterson
Director, The Parallel Computing Research Laboratory
Pardee Professor of Computer Science, U.C. Berkeley
Co-author of Computer Architecture: A Quantitative Approach

Written by two teaching pioneers, this book is the definitive practical reference on programming massively parallel processors—a true technological gold mine. The hands-on learning included is cutting-edge, yet very readable. This is a most rewarding read for students, engineers and scientists interested in supercharging computational resources to solve today's and tomorrow's hardest problems.