
hypothetical raw GPU programming

34 replies to this topic

#1 fir   Members   -  Reputation: -443


Posted 11 July 2014 - 11:47 AM

I wonder if it would be possible to run raw assembly on a GPU

(maybe I should call them G-CPUs, as they may just be a number of simplified CPUs of some kind, or something like that).

Could someone elaborate on this - is a GPU just a row of simplistic CPUs?

What would such code look like - a number of memory spaces, each one filled with assembly code, which are then all run?

 

 




#2 fastcall22   Crossbones+   -  Reputation: 3976


Posted 11 July 2014 - 12:04 PM

CPUs and GPUs are both processors, but they specialize in different areas. GPUs excel at massively parallel processing, whereas CPUs excel at general processing. You might want to look into OpenCL, which lets you run general-purpose compute code on video cards.
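For a taste of what GPU code actually looks like in practice, here is a minimal sketch of an "add two arrays" kernel (CUDA is used purely as an illustration - OpenCL code is structurally very similar - and all the names below are made up): you write one scalar-looking function, and the hardware runs it for many threads at once.

    // Each GPU thread computes one output element; the hardware runs many threads in parallel.
    __global__ void add_arrays(const float* a, const float* b, float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // this thread's element index
        if (i < n)                                       // guard against the last partial block
            out[i] = a[i] + b[i];
    }

    // Host side (after copying a and b into GPU memory): launch enough 256-thread blocks to cover n.
    // add_arrays<<<(n + 255) / 256, 256>>>(d_a, d_b, d_out, n);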

WW91J3ZlIGdvdCBhIHNlY3JldCBib251cyBwb2ludCE=


#3 Ravyne   Crossbones+   -  Reputation: 6774


Posted 11 July 2014 - 12:37 PM

It depends very much on the internal architecture. Because GPUs have sort of hidden behind the problem they focus on, the underlying silicon has changed radically even in just the 10 or so years since they became really programmable. Off the top of my head: from AMD you had VLIW with an issue width of 5 and then 4, going back to the HD5x00 and HD6x00 series, single-SIMD-per-core before that, multiple-SIMD-per-core recently in GCN 1.0, and GCN 1.1/2.0 with the same basic architecture but better integration into the system's memory hierarchy. From nVidia, you've had half-sized, double-pumped cores, single 'cores' with very many relatively independent ALUs, and most recently (Maxwell) a design that shrank the number of ALUs per core back down.

 

Both companies do expose a kind of assembly language for recent GPUs if you look around for it. It is entirely possible to write assembly programs for the GPU, or to build a compiler that can target them. But the mapping isn't quite as 1:1 as on, say, x86 (and even on x86 you're only talking to a logical 'model' of an x86 CPU; the actual micro-code execution is more RISC-like).

 

If you branch out from the PC and look at the mobile GPUs you find in phones and tablets, then you have tiled architectures too. ARM's latest Mali GPU architecture, Midgard, is something really unique: every small-vector ALU is completely execution-independent of every other, so every pixel could go down a different codepath with no penalty at all, which is something no other GPU can do. In a normal GPU the penalty for divergent branches (an 'if' where the condition is true for some pixels and false for others) is proportional to the square of the number of divergent branches in the codepath, which can quickly become severe.

 

Then you have something similar in Intel's MIC platform, which was originally going to be a high-end GPU ~5 years ago. The upcoming incarnation of MIC is Knights Landing, which is up to 72 customized x86-64 processors based on the most recent Silvermont Atom core. It's been customized by having x87 floating point chopped off; each physical core runs 4 hyper-threads and has two 512-bit SIMD units, and the package has up to 8 GB of on-package RAM on a 512-bit bus giving ~350 GB/s of bandwidth.

 

Anyhow, I get talking about cool hardware and I start to ramble :) -- Long story short: yes, you can do what you want to do today, but the tricky part is that GPUs just aren't organized like a CPU, or even like a bunch of CPUs (Midgard is the exception, Knights Landing to a lesser extent), so you can't expect them to run CPU-style code well. A big part of making code go fast on a GPU is partitioning the problem into manageable, cache-and-divergence-coherent chunks, which tends to either be super-straightforward (easy) or require you to pull the solution entirely apart and put it back together in a different configuration (hard).
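To make the divergence point concrete, here is a rough CUDA-style sketch (illustrative only; the kernel names are made up, and in a case this tiny the compiler would often remove the branch for you anyway). Both kernels do the same work, but in the first one threads of the same wave can take different paths, so the hardware runs both sides with lanes masked off, while the second stays coherent.

    // Divergent: the condition varies per element, so lanes of the same wave can disagree
    // and the hardware executes both paths, masking off the inactive lanes each time.
    __global__ void divergent(const float* in, float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (in[i] > 0.0f)
            out[i] = sqrtf(in[i]);   // some lanes run this...
        else
            out[i] = 0.0f;           // ...and the rest run this afterwards
    }

    // Coherent: every lane executes the same instruction stream; the compiler can usually
    // turn the ternary into a select, so there is no divergent branch at all.
    __global__ void coherent(const float* in, float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float v = in[i];
        out[i] = (v > 0.0f) ? sqrtf(v) : 0.0f;
    }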



#4 fir   Members   -  Reputation: -443


Posted 11 July 2014 - 02:39 PM

 

I have a problem understanding that, as my knowledge is low - I reread it a few times. Probably to know exactly how a (particular) GPU is built I would have to work at some company that makes them :/

But I can show my simple schematic picture of this and ask for some clarification if possible. To me, the GPU world seems to be made up of these parts:

- Input VRAM (containing textures, geometry and other things)

- Output VRAM (containing framebuffers etc.)

- Some CPUs (I know nothing about these, but I imagine they are something like normal CPUs driven by some assembly, though maybe that assembly is a bit simpler(?). People also say they are close to x86 SSE assembly, at least by the type of registers? I don't know.)

- Some assembly program (or programs) - there must be some program if there are CPUs - but it is totally unknown to me whether this is one piece of code or many programs, one for each CPU. Are those programs clones of one program, or are they different?

The question of what those programs look like is one unknown. The other important unknown is whether such hardware (I mean the GPU), when executing the whole transformation from input VRAM to output VRAM, uses only those assembly programs and those CPUs, or whether it also has some other kind of hardware that does some of the transforms but is not a CPU-plus-assembly, rather some other 'hardware construct' (maybe something hardwired in transistors, not programmable by assembly - if such things exist). I'm speculating.



#5 SeanMiddleditch   Members   -  Reputation: 3899


Posted 11 July 2014 - 04:16 PM

Probably to know exactly how a (particular) GPU is built I would have to work at some company that makes them :/


Each individual GPU can use an entirely different instruction set, even within the same series of GPUs. AMD rather publicly switched from a "VLIW5" to a "VLIW4" architecture recently, which necessitated an entirely different instruction set (and during the transition, some GPUs they released used the old version while other variations in the same product line used the new one). Even within a broad architecture like AMD's VLIW4, each card may have minor variations in its instruction set that are abstracted by the driver's low-level shader compiler.

Your only sane option is to compile to a hardware-neutral IR like SPIR (https://www.khronos.org/spir) or PTX (which is NVIDIA-specific). SPIR is the Khronos Group's intended solution to this problem: it allows a multitude of languages and APIs to target GPUs without having to deal with the unstable instruction sets.

Some CPUs (I know nothing about these, but I imagine they are something like normal CPUs driven by some assembly, though maybe that assembly is a bit simpler(?). People also say they are close to x86


GPUs are not like CPUs. They are massive SIMD units. They'd be most similar to doing SSE/AVX/AVX512 coding, except _everything_ is SIMD (memory fetches/stores, comparisons, branches, etc.). A program instance in a GPU is really a collection of ~64 or so cores all running in lockstep. That's why branching in GPU code is so bad; in order for one instance to go down a branch _all_ instances must go down that branch (and ignore the results of doing so on instances that shouldn't be on that branch).
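One way to picture that lockstep/masking behaviour is the rough CPU-side sketch below (a purely illustrative pseudo-simulation, not real driver or hardware code; real hardware uses execution masks, not loops) of one 64-wide wave executing a simple branch:

    const int WAVE = 64;   // lanes ("threads") per wave

    // Simulate one wave executing "if (x > 0) y = x * 2; else y = -x;" in lockstep.
    void simulate_wave(const float x[WAVE], float y[WAVE])
    {
        bool active[WAVE];
        for (int lane = 0; lane < WAVE; ++lane)          // 1. every lane evaluates the condition
            active[lane] = (x[lane] > 0.0f);

        for (int lane = 0; lane < WAVE; ++lane)          // 2. the 'then' side runs for the whole wave,
            if (active[lane]) y[lane] = x[lane] * 2.0f;  //    lanes with a false condition are masked off

        for (int lane = 0; lane < WAVE; ++lane)          // 3. then the 'else' side runs,
            if (!active[lane]) y[lane] = -x[lane];       //    with the opposite mask
    }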

You might want to Google "gpu architecture" or check over those SPIR docs.

#6 MJP   Moderators   -  Reputation: 10243


Posted 11 July 2014 - 04:26 PM

There aren't any tools for directly generating and running the raw ISA of a GPU. GPUs are intended to be used through drivers and 3D graphics or GPGPU APIs, which abstract away a lot of the specifics of the hardware. AMD publicly documents the shader ISA, register set, and command buffer format of their GPUs. With this you technically have enough information to build your own driver layer for setting up a GPU's registers and memory, issuing commands, and running your raw shader code. However, this would be incredibly difficult in practice, and would require a lot of general familiarity with GPUs, your operating system, and the specific GPU that you're targeting. And of course, by the time you've finished there might be new GPUs on the market with different ISAs and registers.



#7 Bregma   Crossbones+   -  Reputation: 4768


Posted 11 July 2014 - 04:37 PM

You guys are wayyy over my head with this stuff.  I'm kinda with fir on this; I only have a vague notion of what a GPU does, but I figure it's like he says, a vast array of memory as data input, a similar vast array as output, and a set of processors that read and process instructions from yet another array of memory to transform the input to the output.  Is that not the case?

 

Do all the processing units always work in lock-step or can they be divided into subgroups each processing a different program on different input sets?

 

Is there a separate processor that divides up the data and feeds it or controls the main array of processors as appropriate?

 

I mean, I can describe how a traditional CPU works down to the NAND gate level (and possibly further), but I'd be interested in learning about GPU internals more.


Stephen M. Webb
Professional Free Software Developer

#8 fir   Members   -  Reputation: -443


Posted 11 July 2014 - 04:58 PM

Me too, especially to learn (/discuss the most important knowledge) in an easy way; docs harder than the Intel manuals can be an obstacle. The most important thing would be to get a picture of what this assembly code looks like and how it is executed - for example, whether it is some long linear assembly routine

like

 

start:

  ..assembly..

  ..assembly..

 

  ..assembly..

  ..assembly..

 

  ..assembly..

end.

 

one long routine

that is given to a pack of 64 processors to consume, or whether it is some other structure.

 

Some parts of the pipeline are programmable by the client programmer, but what about the other parts - are those programmed by some internal assembly code, or what? It's hard to find answers, but it would be interesting to know.



#9 Ravyne   Crossbones+   -  Reputation: 6774


Posted 11 July 2014 - 05:39 PM

You guys are wayyy over my head with this stuff.  I'm kinda with fir on this; I only have a vague notion of what a GPU does, but I figure it's like he says, a vast array of memory as data input, a similar vast array as output, and a set of processors that read and process instructions from yet another array of memory to transform the input to the output.  Is that not the case?

 

50,000-foot view? Yes. Think of it like doing CPU assembly -- you write assembly as if you have this idealized x86 processor with a certain number of registers and a certain CISC-like instruction set. That's what you write, and that's what gets stored on your hard disk. But when you send that to the CPU, it does all kinds of crazy transformations and executes an entirely different, though equivalent, program at the lower levels. If you were to write a GPU program in SPIR or PTX, which is about as close to a GPU assembly language as is practical, it's the same situation, except that a) you might be executing on wildly different underlying architectures, with various performance consequences, and b) the logical leap between SPIR/PTX and the GPU silicon is probably an order of magnitude bigger than between x86 assembly and your CPU silicon.

 

Furthermore, almost nothing on a GPU behaves as a CPU does -- not caching, not branching, not latency, not throughput -- good news though, the same old math works (except when it doesn't :) )

 

 

Do all the processing units always work in lock-step or can they be divided into subgroups each processing a different program on different input sets?

 

Is there a separate processor that divides up the data and feeds it or controls the main array of processors as appropriate?

 

In every programmable GPU I'm aware of, with the exception of ARM's new Midgard (and Intel's MIC, if we're including it), yes -- there's always lock-step execution. Sticking just to recent architectures, AMD's GCN compute block has 4 16-wide SIMD ALUs (in typical 4-vector code, each would correspond to x, y, z, and w), and there's only one program counter per block, IIRC. This is where my own knowledge starts to get a bit fuzzy unless I'm looking at docs, but the take-away is that you're certainly lock-step across 16 lanes physically, and I think across the full 64 ALUs as more of a practical matter.

 

At a higher level, you can put different workloads on different compute blocks, and your GPU has between 4 and ~48 of those. The workloads you send the GPU are meted out to the blocks by a higher-level unit. IIRC, in the past this unit could only handle two workloads at a time, but the most recent GCN cards can handle 8. You can think of them a bit like hyperthreads -- mostly the duplication is there so that free compute blocks don't go to waste -- and the workloads themselves are typically very short-lived. The increase from 2 to 8 workloads is possibly even a bit premature from a client-code perspective; I think it's just anticipating that soon GPUs will be able to issue very finely-grained sub-workloads to themselves -- the workloads that come across the PCIe bus are big enough to not really need more than the two.



#10 Ravyne   Crossbones+   -  Reputation: 6774


Posted 11 July 2014 - 05:50 PM

Here are some good resources to read:

 

Background: How GPUs work

 

Midgard Architecture Explored



#11 phantom   Moderators   -  Reputation: 6798


Posted 11 July 2014 - 07:04 PM

You guys are wayyy over my head with this stuff.  I'm kinda with fir on this; I only have a vague notion of what a GPU does, but I figure it's like he says, a vast array of memory as data input, a similar vast array as output, and a set of processors that read and process instructions from yet another array of memory to transform the input to the output.  Is that not the case?


At a high level, yes, that could be the case, but that's taking the bird's-eye view of things :)
 

Do all the processing units always work in lock-step or can they be divided into subgroups each processing a different program on different input sets?


Yes and no.

This is where things get fun, as it immediately depends on the architecture at hand. I'll deal with AMD's latest GCN, because they have opened a lot of docs on how it works.

The basic unit of the GPU, the building block, is the "Compute Unit" or "CU" in their terminology.

The CU itself is made up of a scheduler, 4 groups of 16 SIMD units, a scalar unit, a branch/message unit, a local data store, 4 banks of vector registers, a bank of scalar registers, texture filter units, texture load/store units and an L1 cache.

The scheduler is where the work comes in and where things kick off being complicated right away as it can keep multiple program kernels in flight. A single scheduler can keep up to 2560 threads in flight at once and each cycle can issue up to 5 instructions to the various units from any of the kernels it has in flight.

The work itself is divided up into 'wavefronts'; these are groupings of 64 threads which will be executing in lock step.

So the work is spread as up to 10 waves of 64 threads per SIMD, across the 4 SIMD units (40 wavefronts per CU).
Each of these waves could come from a different program.

Each clock cycle one SIMD is considered for execution, at which point each wave on that SIMD gets a chance to execute an instruction (at most 1), and up to 5 instructions in total can be issued, drawn from these categories: vector ALU, vector memory read/write/atomic, scalar (see below), branch, local data share, export or global data share, and special instructions. Note that there are more instruction types than can be issued at once, and only one instruction of each type can be issued per clock.

(The scalar unit is its own execution unit in its own right; the scheduler issues instructions to it, but they can be ALU, memory or flow-control instructions. Up to 1 per clock can be issued.)

The SIMD units aren't vectored, however; performing a vector operation on a SIMD takes 4 cycles per component. So if you were doing a vec4 + vec4 on SIMD0, it would take 4 cycles per component before the result was ready and the next instruction could be issued - the work is effectively issued as 4 add instructions across 64 threads run in groups of 16. (However, during those 3 cycles the scheduler will be considering SIMD1-3 for execution, so work is still being done on the CU.)
(For sanity's sake, however, we basically pretend that all 64 threads in a work group execute at the same time; it's basically the same thing from a logical point of view.)

So, in one CU, at any given time, up to 10 programs can be running per SIMD, with 40 programs in flight in the CU, managing up to 2560 threads of data. This is a theoretical maximum, however, as it depends on what resources the CU has; the vector register banks are statically allocated, so if one program comes along and grabs all of them on one SIMD then no more work can be issued on that SIMD until it has completed. This register file is 64KB in size, which means you have 16384 registers (64KB / 4 bytes) per SIMD, but this is statically shared across all wavefronts. If, for example, you have a program where each thread requires 84 registers, the SIMD can only maintain 3 wavefronts in flight, as it doesn't have the resources for any more (3 x 64 x 84 = 16128; to issue another wavefront from the same kernel would require another 5376 registers it doesn't have space for). (In theory the SIMD could be handed another program which only required 3 VGPRs per thread, so another wavefront could be launched, but in practice that is unlikely.)
(SGPRs are also limited across the whole CU, as the scalar unit is shared between all SIMDs.)
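To make that register arithmetic concrete, here is a tiny sketch of the same calculation (the 64KB, 4-byte, 64-thread and 10-wave numbers come straight from the description above; the function itself is just illustrative, not a real driver query):

    // How many 64-thread wavefronts fit on one GCN SIMD for a given VGPR count per thread?
    int waves_per_simd(int vgprs_per_thread)
    {
        const int vgpr_file_bytes = 64 * 1024;           // 64KB vector register file per SIMD
        const int total_vgprs     = vgpr_file_bytes / 4; // 4-byte registers -> 16384 VGPRs
        const int wave_width      = 64;                  // threads per wavefront
        const int hw_max_waves    = 10;                  // scheduler limit per SIMD

        int fit = total_vgprs / (vgprs_per_thread * wave_width);
        return fit < hw_max_waves ? fit : hw_max_waves;
    }

    // waves_per_simd(84) == 3   (3 * 64 * 84 = 16128 of the 16384 registers used)
    // waves_per_simd(16) == 10  (16 waves would fit by register count, but 10 is the hardware cap)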

So, given an easy program flow which is only 64 threads in size:
- The program is handed off to the CU.
- The CU's scheduler assigns it to a SIMD unit.
- Each clock cycle the scheduler looks at a SIMD unit and decides which instructions from which wavefronts are executed.

If you have more than 64 threads in the group of work, then the work is broken up and spread across either different SIMDs or different wavefronts on the same SIMD. It will always reside on the same CU, however; this is because of the memory barriers etc. needed to treat the execution as one group.
(The 64-thread limit is useful to know because if you write code which fits into one wavefront you can assume all 64 threads are at the same place at the same time, so you can drop atomic operations when operating on local memory stores, etc.)

There is also a lot not covered here, as the GPU requires you to manage the cache yourself for memory read/write operations, and there is a lot of complex detail, most of which is hidden by the graphics/compute API of choice, which will Do The Right Thing for you.

Of course, a GPU isn't made up of just one CU; an R290X, for example, has 44 CUs, which means it can have up to 112,640 work items in flight at once.

Pulling back out from the CU we arrive at the Shader Engine; this is a grouping of N CUs which also contains the geometry processor, rasterizer and ROP/render-backend units - the GP and rasterizer push work into the CUs; the ROPs take 'exports' and do the various graphics blending operations etc. to write data out.

Stepping back up from that again, we come to the Global Data Store and L2 cache, which are shared between all the Shader Engines.

Feeding all of this is the GPU front end, which consists of a Graphics Command Processor (GCP) and Asynchronous Compute Engines (ACEs); AMD GPUs have one GCP and up to 8 ACEs, all of which operate independently of each other. The GCP handles traditional graphics tasks (as well as compute), whereas the ACEs are only for compute work. While the GCP only handles the graphics queue, the ACEs can handle multiple command queues (up to 8 each), meaning that you have 64+ ways of feeding commands into the GPU.

The ACEs can operate out-of-order internally (theoretically allowing you to do task graphs on the GPU) and per-cycle can create a workgroup and dispatch one wavefront from that workgroup to the CUs.

So, a compute flow would be;
- work is presented to GCP or ACE
- workgroup is created and wavefront dispatched to a CU
- CU associates wavefront with SIMD
- each clock cycle a CU looks at a wavefront on a SIMD and dispatches work from it.

Data fetches in the CU are effectively 'raw', pointer-based; typically some VGPRs or SGPRs are used to pass in tables of data, effectively base addresses, from which the memory can be fetched. (There is a whole L1/L2 cache architecture in place.)

There are probably other things I've missed (bank conflicts on the local data store spring to mind...), but keep in mind this is specific to AMD's GCN architecture (and if you want to know more details then AMD's developer page is a good place to go; white papers and presentations can be found there - even I had to reference one to keep the numbers/details straight in my head).

NV is slightly different, and the mobile architectures are going to be very different again (they work on a binned, tiled rendering system, so their data flow is different), as are the older GPUs, and in a few years probably the newer ones too.

#12 dave j   Members   -  Reputation: 581


Posted 12 July 2014 - 04:29 AM

For the gory details of a relatively simple[1] GPU, Broadcom have released documentation for the Raspberry Pi's GPU.


[1] Simpler than the AMD GCN described by phantom at least.

#13 Ohforf sake   Members   -  Reputation: 1478


Posted 12 July 2014 - 05:17 AM

I mean, I can describe how a traditional CPU works down to the NAND gate level (and possibly further), but I'd be interested in learning about GPU internals more.

Phantom pretty much described how it works (for the current generation), but to give a very basic comparison to CPUs:

Take your i7 CPU: It has (amongst other things) various caches, scalar and vectorized 8-wide ALUs, 4 cores and SMT (intel calls it "hyperthreading") that allows for 2 threads per core.
Now strip out the scalar ALUs, ramp up the vectorized ALUs from 8-wide to 32-wide and increase their number, allow the SMT to run 64 instead of 2 "threads"/warps/wavefronts per core (note that on GPUs, every SIMD lane is called a thread) and put in 8 of those cores instead of just 4. Then increase all ALU latencies by a factor of about 3, all cache and memory latencies by a factor of about 10, and also memory throughput by a significant factor (don't have a number, sorry).
Add some nice stuff like texture samplers, shared memory (== local data store) and some hardware support for divergent control flows, and you arrive more or less at an NVidia GPU.

Again, Phantom's description is way more accurate, but if you think in CPU terms, those are probably the key differences.

Edited by Ohforf sake, 12 July 2014 - 05:18 AM.


#14 fir   Members   -  Reputation: -443


Posted 12 July 2014 - 06:14 AM

Does such hardware operate on one 'lite' address space?

I understand from the above that in such hardware there are two kinds of threads: one kind is strictly parallel 'threads' that share the same instruction pointer (I lost track of how many such threads there are, but someone mentioned 22 thousand such scalar channels, or so), but there are also real, separate execution-track machines, where each one has its own instruction pointer and can execute a distinct chunk of assembly code.

If so - taking this second view - the code to execute on each of those track machines must be provided to them in some form. My main question is: how are those codes provided to them? Is there some overarching assembly routine - some program that assigns separate assembly routines to the track machines and coordinates them?



#15 Tribad   Members   -  Reputation: 805


Posted 12 July 2014 - 06:20 AM

Two things: micro-code and hardware.

As with any other processing unit, you can implement it purely in hardware, like Zilog's Z80 CPU, or, as is more common these days, with microprogramming. Some parts are better built in hardware; others are better built in some type of microcode.



#16 Hodgman   Moderators   -  Reputation: 27790


Posted 12 July 2014 - 07:29 AM

The front-end of the GPU processes very high-level/complex instructions - basically the result of Draw/Dispatch commands from GL/D3D. This front-end reads/executes these commands, which results in work occurring in the shader cores.

E.g. the front-end might execute an instruction that says to execute a compute shader for 128x128 items. It then creates 128x128 = 16384 "threads" and groups them into 16384/64 = 256 "waves" (because it uses 64-wide SIMD to work on 64 "threads" at once). Each of these "waves" is like a CPU thread, having its own instruction pointer, execution state/register file, etc... The GPU then basically "hyperthreads" those 256 "waves". If it's only got 1 "processor", then it will only execute 1 wave (64 "threads") at a time. If it has to stall due to a cache miss etc., it will save the execution state and switch to a different wave (which will have its own instruction pointer, etc).
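The same arithmetic as a trivial sketch (the 128x128 dispatch and 64-wide waves are the numbers from the example above; purely illustrative):

    #include <cstdio>

    int main()
    {
        // A Dispatch of a 128x128 compute shader, on hardware that executes 64-wide waves.
        const int dispatch_x = 128, dispatch_y = 128;
        const int wave_width = 64;                    // "threads" that run in lockstep

        int threads = dispatch_x * dispatch_y;        // 16384 logical threads
        int waves   = threads / wave_width;           // 256 waves, each with its own IP and registers

        printf("%d threads -> %d waves\n", threads, waves);
        return 0;
    }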

#17 fir   Members   -  Reputation: -443


Posted 12 July 2014 - 07:57 AM


 

Well, maybe most of it is clear: this is a set of processors that execute assembly code. Though one important thing was not clearly answered (I know it may be hard to answer): whether this set of processors is fully programmable, or whether some part of this overall assembly code flow is hardcoded in hardware.

For example, if there is a set of 64 worker processors (waves), there must be some scheduler/coordinator over them - is that a fully programmable assembly executor, or is it something more constrained and not flexibly programmable?

Is this architecture like 1 scheduling processor + 64 worker processors?

I also do not know how constrained the worker 'processors' are - are they able to run any code, like a real CPU?



#18 Ohforf sake   Members   -  Reputation: 1478


Posted 12 July 2014 - 08:47 AM

The "worker processors", as you call them, are turing complete.

Edit: I'm assuming, with "scheduling processor" you are referring to the warp/CU schedulers.
There is no scheduling processor. The warp schedulers (there can be more then one per core) are hardcoded. They have to make a decision every cycle, within the cycle. No piece of software can do that. They are like the SMT scheduler in the CPU, you might be able to influence them, but you can't program them. And while you can use _mm_pause to hint a yield on the CPU side, to my knowledge the common APIs do not support s.th. similar for the GPU. It might be, that the drivers can change certain scheduling policies, but if they can they don't expose it in the APIs.

I think NVidia once considered, putting a general purpose ARM core on the GPU die for some driver/management stuff, but I think they never actually went through with it.

Edited by Ohforf sake, 12 July 2014 - 08:56 AM.


#19 fir   Members   -  Reputation: -443


Posted 12 July 2014 - 09:17 AM

The "worker processors", as you call them, are turing complete.

Edit: I'm assuming, with "scheduling processor" you are referring to the warp/CU schedulers.
There is no scheduling processor. The warp schedulers (there can be more then one per core) are hardcoded. They have to make a decision every cycle, within the cycle. No piece of software can do that. They are like the SMT scheduler in the CPU, you might be able to influence them, but you can't program them. And while you can use _mm_pause to hint a yield on the CPU side, to my knowledge the common APIs do not support s.th. similar for the GPU. It might be, that the drivers can change certain scheduling policies, but if they can they don't expose it in the APIs.

I think NVidia once considered, putting a general purpose ARM core on the GPU die for some driver/management stuff, but I think they never actually went through with it.

Hm, that's sad news. I had hoped it would be fully programmable: one scheduling processor and a couple of working but flexible processors.

Now it seems that there is no scheduling processor; even though the working processors are physically flexible, the absence of such flexible scheduling commanders makes them less flexible to use (though those are speculations).

I don't quite see what this scheduling device is doing; I see you say it's something like microcode in a CPU, that is, dispatching one assembly stream into channels, blocks etc. If so, does that mean that the GPU is able to execute only one input assembly stream and only parallelises it internally? So even if the IPs (instruction pointers) are separate, those processors are not free to use, as they are covered by something like a microcode manager?

PS. Is that input stream some assembly stream (that is later transformed into separate assembly streams in waves), or is this input stream more like an input array of some data to process?


Edited by fir, 12 July 2014 - 09:19 AM.


#20 phantom   Moderators   -  Reputation: 6798


Posted 12 July 2014 - 09:39 AM

I don't quite see what this scheduling device is doing; I see you say it's something like microcode in a CPU, that is, dispatching one assembly stream into channels, blocks etc. If so, does that mean that the GPU is able to execute only one input assembly stream and only parallelises it internally? So even if the IPs (instruction pointers) are separate, those processors are not free to use, as they are covered by something like a microcode manager?


(Again, focusing on AMD's architecture, as it has the most documentation out there.)

You are thinking about things at the wrong level; the GPU is doing more than one thing at once, across multiple SIMDs inside multiple compute units (CUs) - when talking at this level it's generally best not to refer to 'the GPU' at all but to the individual internal units.

A stream of instructions is directed at a SIMD in a CU, and each SIMD can maintain 10 such instruction streams itself (so it has 10 instruction pointers). Each CU has four SIMDs, so it can keep 40 instruction streams in flight at once (each one made up of 64 threads, or instances, of the instruction stream, which can have their own data but execute the same instruction).

However, the SIMDs don't decide what is executed next, because the CU has shared resources the programs need to use, which is why each CU has a scheduler deciding what to run next. The simplest part of this is deciding which SIMD unit to look at to get each instruction stream (it uses a simple round-robin system); after that, it looks at all the wavefronts/instruction streams being executed and decides what to run next.

The choice is based upon the current state of the CU; for example if one wavefront wants to execute a scalar instruction but the scalar unit is currently busy then it won't get to execute. Same goes for local memory reads and writes as well as global reads and writes; if other SIMD wavefronts have taken up the resource then the work can't be carried out.

The reason this needs to be pretty quick is each clock cycle the scheduler has to look at the state of up to 10 wavefronts and decide which instructions to execute; this isn't something which is going to work very well if written in software as a single clock cycle would, at best, be enough to run one instruction.

So, if you want to think about it at the GPU level, take the R290X version of the GCN core: it can be running 44 CUs * 4 SIMDs * 10 waves of work at any given time; that work could be from one program, or it could be from 1760 different programs/instruction streams (which equates to 112,640 instances of programs running at once), and every cycle 1/10th of those are looked at and work is scheduled to run.




