Multiprocessors / CUDA Cores


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

  • You cannot reply to this topic
4 replies to this topic

#1 Quat   Members   -  Reputation: 424


Posted 30 May 2011 - 06:45 PM

I am confused about what a multiprocessor and a CUDA core are. An NVIDIA whitepaper on the Fermi architecture writes:

"The 512 CUDA cores are organized in 16 SMs ([streaming multiprocessors]) of 32 cores each."

Is a CUDA core a processor in its own right? In other words, just as a dual-core CPU has two cores, does one SM have 32 cores? Could it then process 32 warps (a warp = 32 threads) concurrently?

In the CUDA Programming Guide, Figure 1-4 shows thread blocks being distributed over GPU cores. Are these CUDA cores? I always thought thread blocks get distributed over multiprocessors.

Also in one DirectCompute presentation I watched, I heard a reference to a "HW shader unit" and it said that a "thread group lives on a single shader unit." What is a shader unit? Is it a multiprocessor or CUDA core or what?
-----Quat


#2 Ohforf sake   Members   -  Reputation: 1943


Posted 31 May 2011 - 01:03 AM

No, a CUDA core is not a core in the CPU sense. The name is purely a marketing thing.

The "multiprocessors" can be seen as cores, where each multiprocessor has a fairly large number of ALUs (it used to be 8 for compute capability 1.x). The ALUs can be compared to the execution units in Intel CPUs.
The advertised number of CUDA cores is actually the number of multiprocessors times the number of ALUs per multiprocessor, which is just the total number of ALUs on the GPU.

In terms of programming, the multiprocessors are relatively independent of each other, somewhat like CPU cores. The ALUs of a multiprocessor, however, cannot execute a program by themselves. They need their multiprocessor to execute the program, which is why all threads in a warp must (or at least should) follow the same execution path. As long as this is the case, the multiprocessor only needs to decode one program, but can have the ALUs do the computations for 8 (or whatever the current ALU count is) of those threads simultaneously. Note that the number of threads running on a multiprocessor can be higher than the number of ALUs.
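The relationship described above can be sketched in a few lines of code. This is purely illustrative (the function names are made up, and real hardware pipelines instructions rather than looping): one multiprocessor decodes a single instruction stream and applies each instruction across all of its ALU lanes, and the marketing "CUDA core" count is just SMs times ALUs per SM.

```python
# Illustrative sketch, not an NVIDIA API: a multiprocessor decodes ONE
# instruction stream and applies every instruction to many ALU lanes at once.

def cuda_core_count(num_multiprocessors, alus_per_mp):
    """The advertised "CUDA core" count is just the total ALU count."""
    return num_multiprocessors * alus_per_mp

def run_on_multiprocessor(program, lane_values):
    """Execute one shared instruction stream in lockstep across ALU lanes.

    `program` is a list of unary functions standing in for instructions;
    every lane applies the same instruction in the same step.
    """
    results = list(lane_values)
    for instruction in program:           # one decode per instruction...
        for lane in range(len(results)):  # ...applied on every ALU lane
            results[lane] = instruction(results[lane])
    return results

# Fermi (GF100): 16 SMs x 32 ALUs = the advertised 512 "CUDA cores".
print(cuda_core_count(16, 32))  # 512
```

The key point the sketch captures is that the inner loop shares one decoded instruction: the lanes cannot diverge, which is why threads in a warp should follow the same execution path.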


I have never used or looked into DirectCompute, but I think "HW shader unit" refers to a multiprocessor.

#3 MJP   Moderators   -  Reputation: 13294


Posted 31 May 2011 - 01:21 AM

I am confused about what a multiprocessor and a CUDA core are. An NVIDIA whitepaper on the Fermi architecture writes:

"The 512 CUDA cores are organized in 16 SMs ([streaming multiprocessors]) of 32 cores each."

Is a CUDA core a processor in its own right? In other words, just as a dual-core CPU has two cores, does one SM have 32 cores? Could it then process 32 warps (a warp = 32 threads) concurrently?

In the CUDA Programming Guide, Figure 1-4 shows thread blocks being distributed over GPU cores. Are these CUDA cores? I always thought thread blocks get distributed over multiprocessors.

Also in one DirectCompute presentation I watched, I heard a reference to a "HW shader unit" and it said that a "thread group lives on a single shader unit." What is a shader unit? Is it a multiprocessor or CUDA core or what?


A "CUDA core" on an NVIDIA chip is not at all a "core" like on a CPU, where each core has its own instruction stream, cache, etc. Basically you can think of each multiprocessor as having a SIMD unit with N ALUs, where all ALUs execute the same instruction and each CUDA/compute/CL thread executes on one of those ALUs. So for the Fermi GPU you're talking about, each multiprocessor can execute 32 threads simultaneously, which gives you (number of multiprocessors x 32) threads executing concurrently on the GPU.

The GPU groups threads into sets of 32 (a warp), and each multiprocessor will have multiple warps active on it at the same time. It then constantly swaps warps in and out, based on the latency of the instructions each warp is executing. In the case of compute shaders, each thread group ends up getting mapped to several warps, which all execute on the same multiprocessor. It's similar for CUDA and CL threads.
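The mapping described above can be expressed as simple arithmetic. This sketch assumes only what the posts state (a warp size of 32 and 32-lane Fermi SMs); the function names are invented for illustration:

```python
WARP_SIZE = 32  # threads per warp on NVIDIA GPUs

def warps_per_thread_group(group_size):
    """A compute-shader thread group is split into ceil(group_size / 32)
    warps, all of which are scheduled on the same multiprocessor."""
    return (group_size + WARP_SIZE - 1) // WARP_SIZE

def threads_executing_concurrently(num_multiprocessors, lanes_per_mp=32):
    """On Fermi, each SM executes 32 threads at a time, so the whole GPU
    executes num_multiprocessors * 32 threads in any given cycle."""
    return num_multiprocessors * lanes_per_mp

print(warps_per_thread_group(256))         # 8 warps for a 256-thread group
print(threads_executing_concurrently(16))  # 512 on a 16-SM Fermi
```

Note the distinction the numbers make visible: far more warps can be *resident* on an SM than can *execute* in one cycle; the extras exist so the scheduler can swap warps to hide instruction latency.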

For stuff like this, this presentation is the best resource.

#4 Krypt0n   Crossbones+   -  Reputation: 2926


Posted 31 May 2011 - 06:33 AM

"The 512 CUDA cores are organized in 16 SMs ([streaming multiprocessors]) of 32 cores each."
Is a CUDA core like a processor on its own?

What they call an SM is what you would call a "core" on a CPU.

What they call a "core" is a lane of a SIMD unit on a CPU.

That means Fermi has 16 cores, each with 32-wide SIMD.

In other words, like a dual core CPU has two cores, one SM has 32 cores? So it could process 32 warps (warp = 32 threads) concurrently?



A warp also ties up an allocation of resources (registers) on an SM. A Fermi SM has 32768 registers and can hold at most 1536 resident threads (48 warps), which means the more registers your CUDA program uses per thread, the fewer warps you can have in flight.

An SM on Fermi can execute two warps at the same time, as it actually has two SIMD16 units (more, depending on which GF1xx chip you look at); with a warp size of 32, each unit issues the same instruction twice, over two cycles. On a CPU this would roughly be called HyperThreading or SMT or so, depending on the CPU vendor.

So it depends on what you call "concurrent": either two warps executing in a given cycle, or however many warps are resident, which is limited by register usage.
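The "resident warps limited by register usage" bound can be sketched as arithmetic. This is a simplification under stated assumptions (Fermi's 32768 registers and 1536-thread cap per SM); real occupancy also depends on shared memory and block-size granularity, which this ignores:

```python
# Simplified Fermi occupancy sketch: how many warps can be resident on one SM?
REGISTERS_PER_SM = 32768      # 32-bit registers per Fermi SM
MAX_RESIDENT_THREADS = 1536   # per Fermi SM (48 warps of 32 threads)
WARP_SIZE = 32

def resident_warps(registers_per_thread):
    """Warps in flight on one SM, whichever limit binds first."""
    by_registers = REGISTERS_PER_SM // (registers_per_thread * WARP_SIZE)
    by_thread_cap = MAX_RESIDENT_THREADS // WARP_SIZE
    return min(by_registers, by_thread_cap)

print(resident_warps(20))  # light register use: hits the 48-warp cap
print(resident_warps(63))  # heavy register use: 32768 // (63*32) = 16 warps
```

With 20 registers per thread the register file would allow 51 warps, but the thread cap limits you to 48; at 63 registers per thread, registers become the bottleneck and only 16 warps fit.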

#5 Quat   Members   -  Reputation: 424


Posted 31 May 2011 - 10:32 AM

Thanks all, I think I get it. Just to summarize:

Fermi has 16 SMs (sort of analogous to 16 CPU cores).

A CUDA core processes one thread at a time. So because an SM has 32 CUDA cores, it can process 32 threads at once.
-----Quat



