Does g_p2DTex->SetResource() move GPU memory?


if I do this:

g_p2DTex->SetResource(textureA);

//apply and draw something

g_p2DTex->SetResource(textureB);

//apply and draw something

does that move "textureB"'s data to where "textureA"'s data sits inside the GPU, or does it just change pointers?

It changes pointers.
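
To put that concretely: with the effects framework, g_p2DTex->SetResource() just records which ID3D11ShaderResourceView the effect variable points at; when the pass is applied, it boils down to something like the plain-API call below (pContext and pTextureB_SRV are made-up names, and slot 0 is just an example):

// Binding a texture at the D3D11 API level: the context records which
// SRV pointer is attached to a pixel-shader slot. No texel data moves here.
ID3D11ShaderResourceView* srvs[1] = { pTextureB_SRV };
pContext->PSSetShaderResources( 0, 1, srvs );
// The GPU reads the texture from wherever the driver placed it when the draw runs.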


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

It just changes pointers.

But what happens inside D3D11 is actually much more complex. The driver may have decided to page Texture B out of GPU memory because you were not using it (and it was probably running low on VRAM). If that's the case, setting Texture B means the driver will copy its data back from system RAM to VRAM.

And if it's really, really running out of space, it may page out Texture A to make room for Texture B. (It is extremely rare that a driver will page out one texture for another when both are going to be used in the same frame; in that case the driver will probably signal an out-of-GPU-memory error. But if Texture A was used in the previous frame and Texture B in the next one, this can happen.)
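
If you want to influence that paging, DXGI does expose an eviction-priority hint on D3D11 resources; how much the driver honours it is up to the driver. A minimal sketch (pRarelyUsedTexture is a made-up name for one of your ID3D11Texture2D objects):

// Hint the driver that this texture is a good candidate to page out first
// when VRAM gets tight.
IDXGIResource* pDxgiRes = nullptr;
if ( SUCCEEDED( pRarelyUsedTexture->QueryInterface( __uuidof(IDXGIResource),
                                                    (void**)&pDxgiRes ) ) )
{
    pDxgiRes->SetEvictionPriority( DXGI_RESOURCE_PRIORITY_MINIMUM );
    pDxgiRes->Release();
}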

Also, on a lot of hardware out there, switching textures incurs a "relatively costly" CPU-side driver overhead, as the driver needs to prepare all the texture descriptors that have changed. On some hardware this is quite cheap (almost free); on other hardware it has a real cost, as all of the hardware texture registers have to be reset.

All of this is a lot of overhead. While GPU-side this is just switching pointers, internally:

  • The driver needs to track how often textures are being used, and decide to page out the ones that have remained unused for some time.
  • The driver needs to check if the texture needs to be paged in.
  • For some hardware, the driver may need to set all texture descriptors again (not just the ones that have changed) and bring the GPU to a temporary "mini-halt".
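
The usual way to keep that cost down is simply to pay it less often: sort your draw calls by texture so SetResource is only called when the texture actually changes. A rough sketch (DrawCall and DrawSortedByTexture are made-up names):

#include <algorithm>
#include <vector>

struct DrawCall                              // hypothetical per-draw record
{
    ID3D11ShaderResourceView* pTextureSRV;
    // ...mesh, constants, etc...
};

void DrawSortedByTexture( std::vector<DrawCall>& drawCalls )
{
    // Make draws that share a texture adjacent to each other.
    std::sort( drawCalls.begin(), drawCalls.end(),
               []( const DrawCall& a, const DrawCall& b )
               { return a.pTextureSRV < b.pTextureSRV; } );

    ID3D11ShaderResourceView* pLast = nullptr;
    for ( const DrawCall& dc : drawCalls )
    {
        if ( dc.pTextureSRV != pLast )       // pay the switch cost only on a change
        {
            g_p2DTex->SetResource( dc.pTextureSRV );
            pLast = dc.pTextureSRV;
        }
        // ...apply the pass and issue the draw for dc...
    }
}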

OpenGL 4 with the bindless texture extension gets rid of all this driver overhead, because it places the burden of managing texture residency on you (however **only** DX11-level hardware from NVIDIA and AMD supports bindless; Intel cards can't support it due to hardware limitations); and DX12 promises to place the burden on the developer too (which is a good thing for us performance squeezers).
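
For reference, this is roughly what that self-managed residency looks like with GL_ARB_bindless_texture (a sketch, assuming an extension loader has set up the entry points and glTexture is an existing texture object):

// With bindless, YOU decide when a texture is resident; the driver no longer
// tracks usage per bind.
GLuint64 handle = glGetTextureHandleARB( glTexture );  // handle stays valid for the texture's lifetime
glMakeTextureHandleResidentARB( handle );              // page it in / keep it resident

// ...pass 'handle' to the shader (e.g. through a uniform or an SSBO) and draw...

glMakeTextureHandleNonResidentARB( handle );           // release residency when done with it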

While we wait for the future to arrive, texture arrays are the next best thing: they let you choose between textures in the shader and call SetResource very infrequently, while indirectly controlling residency (if you pack 16 textures together in the same array, the driver has to page them in/out as a whole pack). They have their disadvantages too, though (all textures must share the same pixel format and resolution, and paging in/out happens at a coarser granularity).
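
For completeness, a texture array in D3D11 is just an ordinary texture created with ArraySize > 1 (sketch below, with example values; pDevice is a made-up name); in HLSL you declare it as Texture2DArray and pick the slice in the sample coordinates instead of rebinding:

// 16 same-format, same-resolution textures packed into one resource.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = 1024;                        // every slice shares this resolution
desc.Height           = 1024;
desc.MipLevels        = 1;
desc.ArraySize        = 16;                          // number of slices
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;  // every slice shares this format
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* pTexArray = nullptr;
pDevice->CreateTexture2D( &desc, nullptr, &pTexArray );
// One SRV covers the whole array; the shader chooses the slice, so the pack
// is bound (and paged in/out by the driver) as a single unit.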

