You guys are wayyy over my head with this stuff. I'm kinda with fir on this; I only have a vague notion of what a GPU does, but I figure it's like he says, a vast array of memory as data input, a similar vast array as output, and a set of processors that read and process instructions from yet another array of memory to transform the input to the output. Is that not the case?
At a high level, yes, that's roughly the case, but it's very much the bird's-eye view of things.
Do all the processing units always work in lock-step or can they be divided into subgroups each processing a different program on different input sets?
Yes and no.
This is where things get fun, as it immediately depends on the architecture at hand. I'll deal with AMD's latest GCN because they have opened a lot of docs on how it works.
The basic unit of the GPU, the building block, is the "Compute Unit" or "CU" in their terminology.
The CU itself is made up of a scheduler, 4 SIMD units (each 16 lanes wide), a scalar unit, a branch/message unit, a local data share, 4 banks of vector registers (one per SIMD), a bank of scalar registers, texture filter units, texture load/store units and an L1 cache.
The scheduler is where the work comes in, and it's where things get complicated right away, as it can keep multiple program kernels in flight. A single scheduler can keep up to 2560 threads in flight at once, and each cycle it can issue up to 5 instructions to the various units from any of the kernels it has in flight.
The work itself is divided up into 'wavefronts'; these are groupings of 64 threads which execute in lock-step.
So the work is spread as up to 10 waves of 64 threads per SIMD, across the 4 SIMD units (40 waves in total).
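If you want to sanity-check those numbers, the arithmetic is trivial (Python here just as a scratchpad; the constant names are mine):

```python
# Back-of-envelope GCN CU capacity, using the numbers above.
SIMDS_PER_CU = 4
WAVES_PER_SIMD = 10      # max wavefronts a SIMD can hold in flight
THREADS_PER_WAVE = 64    # threads per wavefront, executing in lock-step

waves_per_cu = SIMDS_PER_CU * WAVES_PER_SIMD        # 40 wavefronts
threads_per_cu = waves_per_cu * THREADS_PER_WAVE    # 2560 threads in flight
print(waves_per_cu, threads_per_cu)                 # 40 2560
```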
Each of these waves could come from a different program.
Each clock cycle one SIMD is considered for execution, at which point each wave on that SIMD gets a chance to execute an instruction (at most 1), and up to 5 instructions in total can be issued across the waves, one each from: vector ALU, vector memory read/write/atomic, scalar (see below), branch, local data share, export or global data share, and special instructions. Note that there are more instruction types than can be issued at once, and only one of each type can be issued per clock.
(The scalar unit is an execution unit in its own right; the scheduler issues instructions to it, but they can be ALU, memory or flow-control instructions. Up to 1 per clock can be issued.)
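To make that issue rule concrete, here's a toy model of it - not how the hardware is actually built, and the instruction type labels are just made up for illustration:

```python
# Toy model of the per-cycle issue rule above: one SIMD is considered per
# cycle; each wavefront on it may issue at most one instruction, up to 5
# issues in total, and only one per instruction type.

def issue_cycle(waves):
    """waves: list of (wave_id, next_instruction_type) pairs."""
    issued = []
    used_types = set()
    for wave_id, instr_type in waves:
        if len(issued) == 5:            # at most 5 instructions per cycle
            break
        if instr_type in used_types:    # only one of each type per cycle
            continue
        issued.append((wave_id, instr_type))
        used_types.add(instr_type)
    return issued

# Six waves want to issue, but the two vector-ALU requests collide,
# so wave 1 has to wait for a later cycle.
waves = [(0, "valu"), (1, "valu"), (2, "smem"), (3, "branch"),
         (4, "lds"), (5, "export")]
print(issue_cycle(waves))
# [(0, 'valu'), (2, 'smem'), (3, 'branch'), (4, 'lds'), (5, 'export')]
```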
The SIMD units aren't vector units from a single thread's point of view, however; a vector operation is broken into per-component instructions, and each instruction takes 4 cycles to complete. So if you were doing a vec4 + vec4 on SIMD0 it would take 4 cycles per component before the result was ready and the next instruction could be issued - the work is effectively issued as 4 add instructions, each run across the 64 threads in groups of 16. (However, during 3 of those 4 cycles the scheduler will be considering SIMD1-3 for execution, so work is still being done on the CU.)
(For sanity's sake, however, we basically pretend that all 64 threads in a wavefront execute at the same time; it's basically the same thing from a logical point of view.)
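A toy model of that cadence, assuming the 16-lane/64-thread numbers above (the function and names are mine, not any real API):

```python
# Toy illustration: a 64-thread wavefront on a 16-lane SIMD is executed
# in 4 groups of 16, one group per cycle, so each instruction occupies
# the SIMD for 4 cycles before the next can issue.
THREADS_PER_WAVE = 64
SIMD_LANES = 16

def execute_instruction(op, a, b):
    """Apply one per-component op across a wavefront, 16 lanes at a time."""
    result = [0] * THREADS_PER_WAVE
    for cycle in range(THREADS_PER_WAVE // SIMD_LANES):   # 4 cycles
        lo = cycle * SIMD_LANES
        for lane in range(lo, lo + SIMD_LANES):
            result[lane] = op(a[lane], b[lane])
    return result

a = list(range(64))
b = [1] * 64
summed = execute_instruction(lambda x, y: x + y, a, b)
print(summed[:4])   # [1, 2, 3, 4]
```

A vec4 + vec4 is then just four of these instructions back to back.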
So, in one CU, at any given time, up to 10 wavefronts (potentially from different programs) can be resident per SIMD, with 40 in flight across the CU, managing up to 2560 threads of data. This is a theoretical maximum however, as it depends on what resources the CU has free; the vector register banks are statically allocated, so if one program comes along and grabs all of them on one SIMD then no more work can be issued on that SIMD until it has completed. The register file is 64KB in size, which means you have 16384 registers (64KB/4 bytes) per SIMD; however, this is statically shared across all wavefronts, so if, for example, you have a program where each thread requires 84 registers, the SIMD can only maintain 3 wavefronts in flight, as it doesn't have the resources for any more (3x64x84 = 16128; to issue another wavefront from the same kernel would require another 5376 registers it doesn't have space for). (In theory the SIMD could be handed another program which only required 3 VGPRs per thread, so another wavefront could be launched, but in practice that is unlikely.)
(SGPRs are also limited across the whole CU, as the scalar unit is shared between all the SIMDs.)
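Here's that register maths as a little occupancy calculator (Python as a scratchpad again; the names are mine):

```python
# VGPR-limited occupancy, per the arithmetic above.
VGPRS_PER_SIMD = (64 * 1024) // 4   # 64KB of 4-byte registers = 16384
THREADS_PER_WAVE = 64
MAX_WAVES_PER_SIMD = 10             # the scheduler's own limit

def max_waves(vgprs_per_thread):
    """How many wavefronts fit in one SIMD's register file."""
    by_registers = VGPRS_PER_SIMD // (vgprs_per_thread * THREADS_PER_WAVE)
    return min(MAX_WAVES_PER_SIMD, by_registers)

print(max_waves(84))   # 3  (3 * 64 * 84 = 16128 used, 256 registers spare)
print(max_waves(24))   # 10 (the register file isn't the bottleneck here)
```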
So, given an easy program flow where the work is only 64 threads in size:
- Program is handed off to the CU
- CU's scheduler assigns it to a SIMD unit
- Each clock cycle the scheduler looks at a SIMD unit and decides which instructions from which wavefronts are executed.
If you have more than 64 threads in the group of work, then it will be broken up and spread across either different SIMDs or different wavefronts on the same SIMD. It will always reside on the same CU, however; this is because of the memory barriers etc. needed to treat the execution as one group.
(The 64-thread wavefront size is useful to know, however, because if you write code which fits into a single wavefront you can assume all 64 threads are at the same place at the same time, so you can drop atomic operations when working on the local data share etc.)
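As a toy model of why that works - in real life this would be shader code, but the lock-step property is the point - here's a wavefront-wide reduction over an LDS-like array with no atomics at all:

```python
# Because all 64 threads execute each step in lock-step, a tree reduction
# over a shared (LDS-like) array needs no atomics or barriers within a
# single wavefront: no thread can observe a half-finished step.
THREADS_PER_WAVE = 64

def wavefront_reduce(lds):
    """Sum 64 values; every 'thread' finishes a step before the next starts."""
    assert len(lds) == THREADS_PER_WAVE
    stride = THREADS_PER_WAVE // 2
    while stride > 0:
        # In lock-step, thread t (t < stride) does lds[t] += lds[t + stride].
        for t in range(stride):
            lds[t] += lds[t + stride]
        stride //= 2
    return lds[0]

print(wavefront_reduce(list(range(64))))   # 2016
```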
There is also a lot not covered here, as the GPU requires you to manage the caches yourself for memory read/write operations, and there is a lot of complex detail, most of which is hidden by the graphics/compute API of choice, which will Do The Right Thing for you.
Of course a GPU isn't made up of just one CU; an R9 290X for example has 44 CUs, which means it can have up to 112,640 work items in flight at once (44 x 2560).
Pulling back out from the CU we arrive at the Shader Engine; this is a grouping of N CUs along with a geometry processor, a rasterizer and the ROP/render backend units - the GP and rasterizer push work into the CUs; the ROPs take 'exports' and do the various graphics blending operations etc. to write data out.
Stepping back up from that again, we come to the Global Data Store and the L2 cache, which are shared between all the Shader Engines.
Feeding all of this is the GPU front end, which consists of a Graphics Command Processor and Asynchronous Compute Engines (ACEs); AMD GPUs have one GCP and up to 8 ACEs, all of which operate independently of each other. The GCP handles traditional graphics tasks (as well as compute), whereas the ACEs are only for compute work. While the GCP only handles the graphics queue, the ACEs can each handle multiple command queues (up to 8), meaning you have 64+ ways of feeding commands into the GPU.
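The "64+" figure is just the queue counts multiplied out:

```python
# Counting the ways to feed the GPU, per the description above: one
# graphics queue on the GCP, plus up to 8 queues on each of up to 8 ACEs.
GCP_QUEUES = 1
MAX_ACES = 8
QUEUES_PER_ACE = 8

total_queues = GCP_QUEUES + MAX_ACES * QUEUES_PER_ACE
print(total_queues)   # 65 independent command queues
```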
The ACEs can operate out-of-order internally (theoretically allowing you to do task graphs on the GPU) and, each cycle, can create a workgroup and dispatch one wavefront from that workgroup to the CUs.
So, a compute flow would be (sketched in code after the list):
- work is presented to GCP or ACE
- workgroup is created and wavefront dispatched to a CU
- CU associates wavefront with SIMD
- each clock cycle the CU considers a SIMD and issues instructions from the wavefronts resident on it.
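As a toy sketch of that flow (the names and the simple round-robin placement are mine, not how the real dispatcher decides):

```python
# Toy version of the flow above: a workgroup is split into 64-thread
# wavefronts, and all of them land on the same CU (see the earlier note
# about barriers), spread across its 4 SIMDs.
THREADS_PER_WAVE = 64
SIMDS_PER_CU = 4

def dispatch_workgroup(num_threads, cu):
    """Return (wavefront, cu, simd) placements for one workgroup."""
    num_waves = -(-num_threads // THREADS_PER_WAVE)   # ceiling division
    return [(wave, cu, wave % SIMDS_PER_CU) for wave in range(num_waves)]

# A 256-thread workgroup becomes 4 wavefronts on one CU, one per SIMD.
for wave, cu, simd in dispatch_workgroup(256, cu=0):
    print(f"wavefront {wave} -> CU {cu}, SIMD {simd}")
```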
Data fetches themselves in the CU are effectively 'raw' pointer based; typically some VGPRs or SGPRs are used to pass in tables of data, effectively base addresses, from which the memory can be fetched. (There is a whole L1/L2 cache architecture in place.)
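A toy sketch of that fetch style, with a completely made-up address layout:

```python
# Toy model of the 'raw pointer' fetch above: scalar registers carry a
# base address into a flat memory, and each thread reads at its own offset.
memory = {0x1000 + 4 * i: float(i) for i in range(64)}   # fake flat memory

def thread_fetch(base, thread_id, stride=4):
    """One thread's load: base address plus a per-thread offset."""
    return memory[base + thread_id * stride]

base_address = 0x1000   # would arrive via SGPRs as part of a resource table
print(thread_fetch(base_address, thread_id=7))   # 7.0
```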
There are probably other things I've missed (bank conflicts on the local data share spring to mind...) but keep in mind this is specific to AMD's GCN architecture (and if you want to know more details then AMD's developer page is a good place to go; white papers and presentations can be found there - even I had to reference one to keep the numbers/details straight in my head).
NV is slightly different, and the mobile architectures are going to be very different again (they work on a binned/tiled rendering system, so their data flow is different), as are the older GPUs, and in a few years probably the newer ones too.