Tispe

Member Since 02 Oct 2010

Posts I've Made

In Topic: DX12 - Documentation / Tutorials?

23 September 2014 - 12:31 AM

Let's see if I got this right. The behind-the-scenes paging in and out that Windows does to share the GPU between processes is not something DX12 game developers will see (e.g. if Windows wants to evict something for another process)?

 

Instead, DX12 game developers can predict which textures they will want to draw soon and start streaming those textures in beforehand. But DX12 does not guarantee that a texture currently in use won't be tossed out of VRAM by Windows? That eviction is transparent to the programmer (it goes on in the background), in which case the GPU stalls until the resource is paged back in?

 

So we can expect better performance, provided Windows isn't paging things in and out in the background?
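Just to pin down what I mean by streaming textures in beforehand, here is a rough, hypothetical sketch. The MakeResident/Evict names and the ID3D12Heap/ID3D12Pageable types are only my guesses for illustration; there are no public DX12 headers yet.

#include <d3d12.h>

// Called when the streaming system predicts a texture heap will be needed
// in a few frames: ask the OS to page its memory into VRAM now, rather than
// stalling at draw time. (Names are guessed for illustration.)
void PrefetchTextureHeap(ID3D12Device* device, ID3D12Heap* textureHeap)
{
    ID3D12Pageable* pageables[] = { textureHeap };
    device->MakeResident(1, pageables);   // explicit "page this in"
}

// Called when the heap won't be needed for a while: hand the VRAM back so
// the OS has no reason to evict something else behind our back.
void RetireTextureHeap(ID3D12Device* device, ID3D12Heap* textureHeap)
{
    ID3D12Pageable* pageables[] = { textureHeap };
    device->Evict(1, pageables);          // explicit "this may be paged out"
}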


In Topic: DX12 - Documentation / Tutorials?

17 September 2014 - 11:31 PM

But if you never use more than the physically available VRAM in a scene, why would you ever need to page data in and out during that scene? Isn't it only when new things enter the scene and old things leave it that we have to move them in and out of GPU memory? Does this memory management actually increase framerate, or is it just loading times that get better?

 

Also, what data actually gets "paged out"? Unless a compute shader produces some data for the CPU, what is there to "copy out"?


In Topic: DX12 - Documentation / Tutorials?

17 September 2014 - 12:40 AM


Tiled resources tie in with the virtual address space stuff. Say you've got a texture that exists in an allocation from pointer 0x10000 to 0x90000 (a 512KB range) -- you can think of this allocation as being made up of 8 individual 64KB pages.
Tiled resources are a fancy way of saying that the entire range of this allocation doesn't necessarily need to be 'mapped', i.e. doesn't have to translate to a physical allocation.
It's possible that 0x10000 - 0x20000 is backed by physical memory, while 0x20000 - 0x90000 aren't valid pointers (much like a null pointer) and don't correspond to any physical location.
This isn't new in itself -- at the OS level, allocating a range of the virtual address space (giving yourself a new pointer value) is a separate operation from allocating physical memory and then linking the two. The part that makes this extremely useful is new shader hardware: when a shader samples a texel from such a texture, it now gets an additional return value indicating whether the fetch actually succeeded (i.e. whether the resource pointer was valid). On older hardware, fetching from an invalid resource pointer would simply crash (as it does on the CPU); now we get error flags instead.
 
This means you can create absolutely huge resources and then decide, at the granularity of 64KB pages, whether each page is actually physically allocated. The application can stay simple and just use huge textures, while the engine/driver/whatever intelligently allocates and deallocates parts of those textures as required.
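If I read that right, the D3D11.2 tiled-resources API that exists today expresses the same idea. Here is my attempt at a minimal sketch (error handling omitted, variable names made up): a big texture gets a virtual address range up front, but only its first 64KB tile is backed by physical memory from a tile pool.

#include <d3d11_2.h>

void MapOnlyFirstTile(ID3D11Device2* device, ID3D11DeviceContext2* context)
{
    // A big texture created as "tiled": it gets a full virtual address
    // range, but no physical memory is committed to it yet.
    D3D11_TEXTURE2D_DESC texDesc = {};
    texDesc.Width            = 16384;
    texDesc.Height           = 16384;
    texDesc.MipLevels        = 1;
    texDesc.ArraySize        = 1;
    texDesc.Format           = DXGI_FORMAT_BC1_UNORM;
    texDesc.SampleDesc.Count = 1;
    texDesc.Usage            = D3D11_USAGE_DEFAULT;
    texDesc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;
    texDesc.MiscFlags        = D3D11_RESOURCE_MISC_TILED;
    ID3D11Texture2D* tiledTexture = nullptr;
    device->CreateTexture2D(&texDesc, nullptr, &tiledTexture);

    // The tile pool is the actual physical memory, carved into 64KB tiles.
    D3D11_BUFFER_DESC poolDesc = {};
    poolDesc.ByteWidth = 16 * 65536;                    // room for 16 tiles
    poolDesc.Usage     = D3D11_USAGE_DEFAULT;
    poolDesc.MiscFlags = D3D11_RESOURCE_MISC_TILE_POOL;
    ID3D11Buffer* tilePool = nullptr;
    device->CreateBuffer(&poolDesc, nullptr, &tilePool);

    // Back only the first 64KB tile of the texture with physical memory.
    // Every other tile stays unmapped; a shader fetch there reports failure
    // instead of crashing.
    D3D11_TILED_RESOURCE_COORDINATE startCoord = {};    // tile (0,0,0), subresource 0
    D3D11_TILE_REGION_SIZE          regionSize = {};
    regionSize.NumTiles = 1;
    UINT rangeFlag      = 0;                            // 0 = map this range to the pool
    UINT poolStartTile  = 0;                            // first tile in the pool
    UINT rangeTileCount = 1;
    context->UpdateTileMappings(tiledTexture, 1, &startCoord, &regionSize,
                                tilePool, 1, &rangeFlag,
                                &poolStartTile, &rangeTileCount, 0);
}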

 

So what you are saying is that we CAN have 2GB+ of game resources allocated in the GPU's virtual address space just fine, but only when we actually need them do we tell the driver to page things into VRAM?

 

Assume now that a modern computer has at least 16GB of system memory, a game has 8GB of resources, and the GPU has 2GB of VRAM. In this situation a DX12 game would copy all 8GB of game data from disk to system memory, then create 8GB of resources in the GPU's virtual address space even though the physical limit is 2GB. Command queues would then tell the driver which parts of those 8GB to page in and out? But isn't that just what the managed pool does anyway?
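In other words, I imagine the per-frame streaming would look something like the sketch below, again borrowing the D3D11.2 calls from the sketch above purely for illustration (in practice the 8GB would have to be spread over several tiled resources, since a single texture can't be that big):

#include <d3d11_2.h>

// Bring one 64KB tile of a huge, mostly-unmapped tiled texture into VRAM:
// point the tile at a slot in the fixed-size VRAM tile pool, then copy its
// texel data over from system memory.
void StreamTileIn(ID3D11DeviceContext2* context,
                  ID3D11Texture2D*      tiledTexture,       // mostly unmapped
                  ID3D11Buffer*         vramTilePool,       // sized to fit in VRAM
                  UINT tileX, UINT tileY,
                  UINT poolTileIndex,                       // which pool slot to (re)use
                  const void* tileDataInSystemRam)          // 64KB of texel data
{
    D3D11_TILED_RESOURCE_COORDINATE coord = {};
    coord.X = tileX;
    coord.Y = tileY;

    D3D11_TILE_REGION_SIZE region = {};
    region.NumTiles = 1;

    UINT rangeFlag      = 0;            // 0 = map this range to the pool
    UINT rangeTileCount = 1;

    // 1. Update the virtual-to-physical mapping for this one tile.
    context->UpdateTileMappings(tiledTexture, 1, &coord, &region,
                                vramTilePool, 1, &rangeFlag,
                                &poolTileIndex, &rangeTileCount, 0);

    // 2. Fill the newly mapped tile with data kept in system memory.
    context->UpdateTiles(tiledTexture, &coord, &region,
                         tileDataInSystemRam, 0);
}

Tossing a tile back out would presumably just be another UpdateTileMappings call marking that range as NULL.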


In Topic: DX12 - Documentation / Tutorials?

13 September 2014 - 05:02 AM


It's always been the case that you shouldn't use more memory than the GPU actually has, because it results in terrible performance. So, assuming that you've always followed this advice, you don't have to do much work in the future.

 

So in essence, a game rated for a minimum of 512MB VRAM (does DX itself have memory requirements?) never uses more than that in any single frame/scene?

 

You would think that AAA games requiring tens of gigabytes of disk space would at some point use more memory in a scene than what is available on the GPU. Is it just artist trickery that keeps every scene below the rated GPU memory?


In Topic: DX12 - Documentation / Tutorials?

08 September 2014 - 01:57 AM

The DX12 overview indicates that the "unlimited memory" the managed pool offers will be replaceable with custom memory management.

 

Say a typical low-end graphics card has 512MB - 1GB of memory. If it's realistic that the total data required to draw a complete frame is 2GB, would that mean the GPU memory has to be refreshed 2-5+ times every frame?

 

Do I need to start batching based on buffer sizes? 

