GPU Programming with C++ and CUDA

You're reading from GPU Programming with C++ and CUDA: Uncover effective techniques for writing efficient GPU-parallel C++ applications, by Paulo Motta (Packt, 1st Edition, Aug 2025, ISBN-13 9781805124542, 270 pages, Paperback).
Table of Contents (17 chapters)

Preface
1. Understanding Where We Are Heading
2. Introduction to Parallel Programming
3. Setting Up Your Development Environment
4. Hello CUDA
5. Hello Again, but in Parallel
6. Bring It On!
7. A Closer Look into the World of GPUs
8. Parallel Algorithms with CUDA
9. Performance Strategies
10. Moving Forward
11. Overlaying Multiple Operations
12. Exposing Your Code to Python
13. Exploring Existing GPU Models
14. Unlock Your Book's Exclusive Benefits
15. Other Books You May Enjoy
16. Index

Running multiple GPUs together

As we mentioned in the chapter introduction, having multiple GPUs on the same machine is not a very common setup, due to its high cost. Nevertheless, it is still a form of overlapping computation that we can use to our advantage. In this section, we will look at an adaptation of the previous vector-matrix multiplication program, in which the problem is divided into two parts and each part is submitted to a different GPU.

We will not use streams in this program, so as to keep the two topics separate. Nor will we carry out any performance measurements, because the system the examples run on uses a PCIe 2.0 bus, which is slow enough to dominate the final timing results.

The key concept in multi-GPU programming is the cudaSetDevice(int d) function, which selects the GPU device that all subsequent CUDA calls from the host thread will address, until it is called again with a different device identifier.
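For illustration, here is a minimal sketch of how such a two-GPU split could look; this is not the book's exact program. The kernel name vecMatMulKernel, the row-major layout, and the even row split are assumptions for the sketch. The point is that every allocation, copy, and kernel launch issued after cudaSetDevice(dev) targets that device.

#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Hypothetical kernel: each thread computes one element of y = A * x,
// where A is rows x cols and stored row-major.
__global__ void vecMatMulKernel(const float* A, const float* x, float* y,
                                int rows, int cols)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < rows) {
        float sum = 0.0f;
        for (int c = 0; c < cols; ++c)
            sum += A[row * cols + c] * x[c];
        y[row] = sum;
    }
}

int main()
{
    const int rows = 1024, cols = 1024;
    std::vector<float> A(rows * cols, 1.0f), x(cols, 1.0f), y(rows, 0.0f);

    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) {
        std::printf("This sketch assumes at least two GPUs.\n");
        return 1;
    }

    // Split the rows of A between the two GPUs.
    const int half = rows / 2;
    const int rowsPerDev[2] = { half, rows - half };
    const int rowOffset[2]  = { 0, half };

    float* dA[2]; float* dX[2]; float* dY[2];

    for (int dev = 0; dev < 2; ++dev) {
        // Every CUDA call that follows addresses this device,
        // until cudaSetDevice() is called again.
        cudaSetDevice(dev);
        cudaMalloc(&dA[dev], rowsPerDev[dev] * cols * sizeof(float));
        cudaMalloc(&dX[dev], cols * sizeof(float));
        cudaMalloc(&dY[dev], rowsPerDev[dev] * sizeof(float));

        cudaMemcpy(dA[dev], A.data() + rowOffset[dev] * cols,
                   rowsPerDev[dev] * cols * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dX[dev], x.data(), cols * sizeof(float), cudaMemcpyHostToDevice);

        const int block = 256;
        const int grid  = (rowsPerDev[dev] + block - 1) / block;
        vecMatMulKernel<<<grid, block>>>(dA[dev], dX[dev], dY[dev],
                                         rowsPerDev[dev], cols);
    }

    // Collect the partial results from each device into the host vector.
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
        cudaMemcpy(y.data() + rowOffset[dev], dY[dev],
                   rowsPerDev[dev] * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dA[dev]); cudaFree(dX[dev]); cudaFree(dY[dev]);
    }

    std::printf("y[0] = %f, y[%d] = %f\n", y[0], rows - 1, y[rows - 1]);
    return 0;
}

Note that in this sketch each device receives its own copy of the input vector x, since one GPU cannot read the other's memory without additional setup such as peer-to-peer access.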

The kernel to perform vector-matrix multiplication...
