Ubik

Member Since 07 Nov 2002
Offline Last Active Today, 09:01 AM
-----

Posts I've Made

In Topic: Cases for multithreading OpenGL code?

17 May 2014 - 06:30 AM

It might even make some sense for Khronos to have some kind of "low-level GL" specification alongside OpenGL, because that way the latter could have an actual cross-platform and cross-vendor implementation built on the former. Then the folks who still want or need to use OpenGL would have to fight against just one set of quirks. Well, as long as the LLGL actually had uniformly working implementations...


In Topic: Cases for multithreading OpenGL code?

16 May 2014 - 04:13 PM

Ah, so it's more like different handlers for different tasks instead of three general-purpose units, though by the sound of it the distinction between drawing and computing isn't even that clear. The latter was for whatever reason my first assumption. From that perspective it felt reasonable to think that there would be no reason for there to be more of them than there are CPU cores to feed them with commands. Teaches me not to make quick assumptions!

 

Thanks for taking the time to explain this. Even though I obviously haven't delved into these lower-level matters much, they are interesting to me too.


In Topic: Cases for multithreading OpenGL code?

16 May 2014 - 11:09 AM

Well, it's only a partly parallelisable problem, as the GPU is reading from a single command buffer (in the GL/D3D model, at least; the hardware doesn't work quite the same way, as Mantle shows by giving you 3 command queues per device, but still...). So at some point your commands have to get into that stream, be it by physically appending to a chunk of memory or by inserting a jump instruction to a block to execute, which means you are always going to have a single thread/sync point.

However, command sub-buffer construction is a highly parallelisable thing; consoles have been doing it for ages. The problem is that the OpenGL mindset seems to be 'this isn't a problem - just multi-draw all the things!', and the D3D11 "solution" was a flawed one because of how the driver works internally.

D3D12 and Mantle should shake this up and hopefully show that parallel command buffer construction is a good thing and that OpenGL needs to get with the program (or, as someone at Valve said, it'll get chewed up by the newer APIs).

Good clarification. Makes sense that even if the GPU has lots of pixel/vertex/compute units, the system controlling them isn't necessarily as parallel-friendly. For a non-hardware person the number three sounds like a curious choice, but in any case it seems to make some intuitive sense to have the number close to a common number of CPU cores. That's excluding hyper-threading, but that's an Intel thing, so it doesn't matter to the folks at AMD. (Though there are the consoles with more cores...)
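To make the "parallel sub-buffer construction, single submission point" idea above a bit more concrete, here is a minimal sketch in plain C++. It is not from anyone's actual engine: the DrawCmd struct, the function names and the threading layout are all made up for illustration, and it assumes a current GL context with function pointers already loaded (GLEW, glad or similar). Worker threads only fill plain command lists; only one thread ever talks to GL.

#include <GL/glew.h>   // or any other GL function loader
#include <functional>
#include <thread>
#include <vector>

// Hypothetical, engine-specific "command" - just plain data, no GL calls.
struct DrawCmd {
    GLuint vao;          // vertex array to bind
    GLsizei indexCount;  // number of indices to draw
};

using CommandList = std::vector<DrawCmd>;

// Runs on any worker thread: builds a command list without touching GL.
void recordScenePart(CommandList& out, const std::vector<DrawCmd>& visibleObjects) {
    out.insert(out.end(), visibleObjects.begin(), visibleObjects.end());
}

// The single submission point: only called on the thread that owns the GL context.
void submit(const std::vector<CommandList>& lists) {
    for (const CommandList& list : lists) {
        for (const DrawCmd& cmd : list) {
            glBindVertexArray(cmd.vao);
            glDrawElements(GL_TRIANGLES, cmd.indexCount, GL_UNSIGNED_INT, nullptr);
        }
    }
}

void renderFrame(const std::vector<std::vector<DrawCmd>>& sceneParts) {
    std::vector<CommandList> lists(sceneParts.size());
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < sceneParts.size(); ++i)
        workers.emplace_back(recordScenePart, std::ref(lists[i]), std::cref(sceneParts[i]));
    for (std::thread& t : workers)
        t.join();
    submit(lists);  // serialise everything into the one GL command stream
}

The point is just that the expensive work (visibility, sorting, building the lists) can run on any number of threads, while the glBindVertexArray/glDrawElements replay stays on the one thread that owns the context.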

 

I'm wishing for something nicer than OpenGL to happen too, but it's probably going to take some time for things to actually change. Not on Windows here, so the wait is likely going to be longer still. Might as well use GL in the meantime.

 

Doing texture streaming in parallel works great on all GPUs I've tested on, which include a shitload of Nvidia GPUs, at least an AMD HD7790 and a few Intel GPUs. It's essentially stutter-free.

Creating resources or uploading data on a second context is mostly what I had in mind earlier. I did try to find info on this, but probably didn't use the right terms, because I got the impression that genuinely parallel data transfer isn't that commonly supported.

 

I'm now thinking that if I'm going to add secondary context support anyway, it will be in a very constrained way: the other context (or rather its wrapper) won't be a general-purpose one but will target things like resource loading specifically. That could allow me to keep the complexity at bay.
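For what it's worth, here is a rough sketch of how a constrained loader context could look. It assumes a second GL context, created with resource sharing enabled, has already been made current on the loader thread (the platform-specific wgl/glX/EGL/SDL setup is omitted), and the struct and function names are made up, not any real API. The fence is what lets the render thread know the upload has actually completed.

#include <GL/glew.h>

struct StreamedTexture {
    GLuint name = 0;
    GLsync ready = nullptr;  // fence signalled once the upload commands have completed
};

// Runs on the loader thread, which has the shared context current.
StreamedTexture uploadTexture(const unsigned char* pixels, int width, int height) {
    StreamedTexture tex;
    glGenTextures(1, &tex.name);
    glBindTexture(GL_TEXTURE_2D, tex.name);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    // Insert a fence and flush so the render context can tell when the texture is usable.
    tex.ready = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glFlush();
    return tex;
}

// Runs on the render thread before the texture is used for the first time.
bool textureIsReady(StreamedTexture& tex) {
    if (!tex.ready)
        return true;  // fence already consumed earlier
    GLenum r = glClientWaitSync(tex.ready, 0, 0);  // poll with zero timeout, don't stall the frame
    if (r == GL_ALREADY_SIGNALED || r == GL_CONDITION_SATISFIED) {
        glDeleteSync(tex.ready);
        tex.ready = nullptr;
        return true;
    }
    return false;
}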


In Topic: Cases for multithreading OpenGL code?

16 May 2014 - 12:12 AM

Alright, I honestly hadn't expected this to be so clear-cut. A bit sad, and maybe a little ironic too, that GPU rendering can't be parallelized from the client side, with OpenGL anyway.

 

Thank you for sharing the knowledge!


In Topic: Managing OpenGL Resources

06 April 2014 - 01:04 PM

I personally chose the reference-counting way, specifically using shared_ptrs. A major reason is that the OpenGL specification itself uses language that either directly talks about a reference count or otherwise mentions that deleted resources only actually die when nothing refers to them anymore. So when an index buffer is attached to a vertex array, my vertex array object is given a (wrapped) shared_ptr that points to the index buffer.

Edit: I think some drivers were/are buggy specifically in how index buffers relate to vertex arrays, in that binding a vertex array doesn't cause the related index buffer to be bound automatically. Because the vertex array object in my code holds a reference to the index buffer, it can work around that bug by explicitly binding the buffer.
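A minimal sketch of what that shared_ptr arrangement could look like; the class and member names are hypothetical and it assumes a current GL 3.x context, so it's an illustration of the idea rather than the actual code being described.

#include <GL/glew.h>
#include <memory>
#include <utility>

class Buffer {
public:
    explicit Buffer(GLenum target) : target_(target) { glGenBuffers(1, &name_); }
    ~Buffer() { glDeleteBuffers(1, &name_); }  // actually dies only when the last owner releases it
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;
    void bind() const { glBindBuffer(target_, name_); }
private:
    GLenum target_;
    GLuint name_ = 0;
};

class VertexArray {
public:
    VertexArray() { glGenVertexArrays(1, &name_); }
    ~VertexArray() { glDeleteVertexArrays(1, &name_); }
    VertexArray(const VertexArray&) = delete;
    VertexArray& operator=(const VertexArray&) = delete;

    void attachIndexBuffer(std::shared_ptr<Buffer> indices) {
        glBindVertexArray(name_);
        indices->bind();                    // GL_ELEMENT_ARRAY_BUFFER binding recorded in the VAO
        indexBuffer_ = std::move(indices);  // keeps the buffer alive as long as the VAO lives
    }

    void bind() const {
        glBindVertexArray(name_);
        if (indexBuffer_)
            indexBuffer_->bind();  // explicit re-bind, working around drivers that lose the binding
    }

private:
    GLuint name_ = 0;
    std::shared_ptr<Buffer> indexBuffer_;
};

Because the vertex array stores the shared_ptr, the index buffer cannot be destroyed while the vertex array is alive, and bind() can re-bind it explicitly to sidestep the driver bug mentioned above.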
