indianmouse 4 days ago

As a very early CUDA programmer who took part in NVidia's CUDA contest in 2008, with what I believe was one of the only entries submitted from India (I'm not claiming it was), and who received a Black Edition card as a consolation/participation prize, I can vouch for the method I followed.

- Look up the CUDA Programming Guide from NVidia

- Read the CUDA programming books from NVidia listed at developer.nvidia.com/cuda-books-archive

- Start creating small programs based on existing implementations (strong C knowledge is required, so brush up if needed)

- Install the required toolchains and compilers (I'm assuming you have the necessary hardware to play around with)

- Find GitHub repositories with CUDA projects and read the code; these days you can also use an LLM to explain the code in whatever way you need

- Start writing small but parallel programs of your own (see the sketch just after this list)
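For instance, the typical first exercise is a vector add: one thread per element, plus the boilerplate of allocating device memory, copying inputs over, launching the kernel, and copying the result back. This is just a minimal sketch of that pattern (not taken from the NVidia materials):

```cuda
// vector_add.cu -- hypothetical first exercise: add two vectors on the GPU.
// Build with: nvcc vector_add.cu -o vector_add
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // guard against running past the end
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Allocate and fill host arrays.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device arrays and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element, 256 threads per block.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```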

And in about a month or two, you should have enough to start writing CUDA programs.

I don't know your skill or experience level, but whatever it is, there are far more sources and resources available now than there were in 2007/08.

Create a 6-8 week study plan and you should be flying soon!

Hope it helps.

Feel free to comment and I'll share whatever I can to guide you.

hiq 4 days ago

> I am assuming you have the necessary hardware to play around

Can you expand on that? Is it enough to have an NVIDIA graphics card that's about 5 years old, or do you need something more specific?

adrian_b 3 days ago

A 5-year old card, i.e. an NVIDIA Ampere RTX 30xx from 2020, is perfectly fine.

Even 7-year old cards, i.e. NVIDIA Turing RTX 20xx from 2018, are still acceptable.

GPUs older than Turing should be avoided: they lack many capabilities of the newer cards, e.g. "tensor cores", and their support will be deprecated in newer CUDA toolkits in the not-too-distant future. The deprecation is happening slowly, though, so for now you can still write programs for Maxwell GPUs from 10 years ago.

Among the newer GPUs, the RTX 40xx SUPER series (i.e. the SUPER variants, not the original RTX 40xx series) has the best energy efficiency. The newest RTX 50xx GPUs have worse energy efficiency than the RTX 40xx SUPER, so they achieve somewhat higher performance only by consuming disproportionately more power. If you need more performance, it is better to use multiple RTX 40xx SUPER cards instead.

rahimnathwani 4 days ago

I'm not a CUDA programmer, but AIUI:

- you will want to install the latest version of CUDA Toolkit (12.9.1)

- each version of the CUDA Toolkit requires the card's driver to be at or above a certain version (e.g. the current toolkit requires driver version 576 or above)

- older cards often have recent drivers, e.g. the current version of the CUDA Toolkit will work with a GTX 1080, since that card has a recent (576.x) driver available (a quick check is sketched below)
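Not something from the list above, but if you want to confirm which driver and runtime your installation actually sees, the CUDA runtime API exposes both; a minimal sketch:

```cuda
// version_check.cu -- small sketch to print the driver and runtime versions
// your CUDA installation reports. Build with: nvcc version_check.cu -o version_check
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);    // highest CUDA version the installed driver supports
    cudaRuntimeGetVersion(&runtimeVersion);  // version of the CUDA runtime you compiled against

    // Values are encoded as major*1000 + minor*10, e.g. 12090 for CUDA 12.9.
    printf("Driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
           driverVersion / 1000, (driverVersion % 100) / 10,
           runtimeVersion / 1000, (runtimeVersion % 100) / 10);
    return 0;
}
```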

slt2021 4 days ago

Each NVIDIA GPU has a certain Compute Capability (https://developer.nvidia.com/cuda-gpus).

Depending on its model and age, your GPU will have a certain capability, and that is the hard ceiling for what you can program using CUDA.
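A small sketch (my addition, not from the linked page) that asks the runtime which compute capability each visible GPU reports:

```cuda
// capability_check.cu -- print the compute capability of every visible GPU.
// Build with: nvcc capability_check.cu -o capability_check
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // prop.major/prop.minor is the compute capability, e.g. 8.6 for an RTX 3080.
        printf("Device %d: %s, compute capability %d.%d\n",
               dev, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```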

dpe82 4 days ago

When you're just getting started and learning, that won't matter though. Any NVIDIA card from the last 10 years should be fine.

sanderjd 4 days ago

Recognizing that this won't result in any useful benchmarks, is there a way to emulate an NVIDIA GPU? In a Docker container, for instance?

indianmouse 4 days ago

That is sufficient.

edge17 4 days ago

What environment do you use? Is it still the case that Windows is the main development environment for CUDA?