Tispe

DX12 - Documentation / Tutorials?




 

 

D3D12 will be the same, except it will perform much better (D3D11 deferred contexts do not actually provide good performance increases in practice... or at least that is the excuse of AMD and Intel, which do not support driver command lists).

Fixed.

 

AMD supports them in Mantle and in multiple game-console APIs. It's a back-end D3D (Microsoft code) issue, which forces a single thread in the kernel-mode driver to be responsible for kickoff. The D3D12 presentations have pointed out this flaw themselves.

 

 
I know that D3D11 command lists are far from perfect, but AMD was the first IHV to sell DX11 GPUs (the Radeon HD 5000 series), claiming "multi-threading support" as one of the big features of their graphics cards.
 
Here is what AMD proclaims:
 
http://www.amd.com/en-us/products/graphics/desktop/5000/5970
 

  • Full DirectX® 11 support
    • Shader Model 5.0
    • DirectCompute 11
    • Programmable hardware tessellation unit
    • Accelerated multi-threading
    • HDR texture compression
    • Order-independent transparency

 

They also claimed the same thing for DX11.1 GPUs when the WDDM 1.2 drivers came out.

 

Yes, their driver is itself "multi-threaded" (I remember that a few years ago it scaled well across two cores, roughly halving the CPU driver overhead), and you can always use deferred contexts in different "app threads" (though since they are emulated by the D3D runtime, that just adds more CPU overhead), but that's not the same thing.
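For reference, you can query at runtime whether the driver backs command lists natively or leaves them to the runtime. A minimal sketch in C, assuming an already-created ID3D11Device named `device`:

```c
/* Query whether deferred-context command lists are backed by the driver
   or emulated by the D3D11 runtime. Assumes `device` was created earlier. */
#define COBJMACROS
#include <d3d11.h>

BOOL driver_supports_command_lists(ID3D11Device *device)
{
    D3D11_FEATURE_DATA_THREADING threading = {0};
    HRESULT hr = ID3D11Device_CheckFeatureSupport(
        device, D3D11_FEATURE_THREADING, &threading, sizeof(threading));

    /* FALSE here means the runtime emulates command lists in software --
       the extra CPU overhead described above. */
    return SUCCEEDED(hr) && threading.DriverCommandLists;
}
```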

 

The Graphics Mafia... ehm, NVIDIA, supports driver command lists, and where used correctly they work just fine (a big example: Civilization V). Yes, they also "cheat" consumers on feature level 11.1 support (just as AMD "cheated" consumers, and developers, on tier-2 tiled resources support), and they really like to break compatibility with old applications and games (especially old OpenGL games), but those are other stories.

Edited by Alessio1989


 

What kind of documentation are you searching for?


Sorry, I should've been more specific. I'm referring to documentation on the binary format, to allow you to produce/consume compiled shaders like you can with SM1-3, without having to pass through Microsoft DLLs or HLSL. Consider projects like MojoShader, which could use this functionality to decompile SM4/5 code to GLSL when porting software, or a possible Linux D3D11 driver that would need to compile SM4/5 bytecode into Gallium IR and eventually GPU machine code.

There's also no way with SM4/5 to write assembly and compile it, which is a pain for various tools that don't want to work through HLSL or the HLSL compiler.
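For what it's worth, the SM4/5 container layout ("DXBC") has been reverse engineered by the community. The sketch below (in C) follows the layout used by projects like MojoShader; the field names are assumptions layered on that reverse engineering, not official documentation:

```c
/* DXBC container header as reverse engineered by the community; field
   names are assumptions, since Microsoft does not document the format. */
#include <stdint.h>
#include <string.h>

typedef struct {
    char     magic[4];      /* "DXBC"                                   */
    uint8_t  checksum[16];  /* digest of the rest of the container      */
    uint32_t one;           /* always 1                                 */
    uint32_t total_size;    /* size of the whole container in bytes     */
    uint32_t chunk_count;   /* number of chunks that follow             */
    /* chunk_count uint32 offsets follow, then the chunks themselves,
       each tagged with a FOURCC such as SHDR/SHEX (bytecode),
       ISGN/OSGN (I/O signatures) or RDEF (resource definitions). */
} dxbc_header;

int looks_like_dxbc(const void *blob, size_t size)
{
    return size >= sizeof(dxbc_header) && memcmp(blob, "DXBC", 4) == 0;
}
```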

 

 

I'm not sure what the actual problem you have here is.  It's an ID3DBlob.

 

If you want to load a precompiled shader, it's as simple as (and I'll even do it in C, just to prove the point) fopen, fread and a couple of fseek/ftell calls to get the file size. Similarly, to save one it's fopen and fwrite.
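Something like this (error handling abbreviated):

```c
/* Load a precompiled shader blob from disk -- plain C, as promised. */
#include <stdio.h>
#include <stdlib.h>

unsigned char *load_blob(const char *path, long *out_size)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;

    fseek(f, 0, SEEK_END);      /* seek to the end...        */
    long size = ftell(f);       /* ...to learn the file size */
    fseek(f, 0, SEEK_SET);      /* then rewind               */

    unsigned char *data = malloc(size);
    if (data && fread(data, 1, size, f) != (size_t)size) {
        free(data);
        data = NULL;
    }
    fclose(f);

    if (data) *out_size = size;
    return data;
}
```

The resulting pointer/size pair is what ID3D11Device::CreateVertexShader and friends take as bytecode.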

 

Unless you're looking for something else that Microsoft actually have no obligation whatsoever to give you, that is.....


The DX12 overview indicates that the "unlimited memory" offered by the managed pool will be replaceable with custom memory management.

 

Say your typical low-end graphics card has 512MB-1GB of memory. Is it realistic to say that the total data required to draw a complete frame is 2GB? Would that mean that the GPU memory would have to be refreshed 2-5+ times every frame?

 

Do I need to start batching based on buffer sizes? 


Is it realistic to say that the total data required to draw a complete frame is 2GB

Unless you have another idea, this is completely unrealistic. The amount of data required for one frame should be somewhere on the order of megabytes... And DX11.2 minimizes the memory requirement with "tiled resources".
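A back-of-envelope estimate (every number here is assumed) of why per-frame data lands in the tens-of-megabytes range at 1080p:

```c
/* Rough per-frame working-set estimate; all sizes are assumed. */
#include <stdio.h>

int main(void)
{
    const long width = 1920, height = 1080, bpp = 4;  /* RGBA8         */
    long color   = width * height * bpp;              /* back buffer   */
    long depth   = width * height * bpp;              /* depth-stencil */
    long dynamic = 8L * 1024 * 1024;                  /* assumed per-frame
                                                         constants/vertices */
    printf("~%ld MB per frame\n",
           (color + depth + dynamic) / (1024 * 1024));   /* ~23 MB */
    return 0;
}
```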

 

 


would that mean that the GPU memory would have to be refreshed 2-5+ times every frame?

This is not the case, but even if it were... The article is not very clear on this. It says that the driver will tell the operating system to copy resources into GPU memory (from system memory) as required, but only the application can free those resources, once all of the queued commands using them have been processed by the GPU.

It's not clear whether the resources can also be released (from GPU memory, by the OS) during the processing of already-queued commands, to make room for the next 512MB (or 1GB, or whatever size) of your 2GB of data. My guess is that this is not possible. It would imply that the application's "swap resource" request could somehow be plugged into the driver/GPU's queue of commands, to release unused resource memory mid-frame, which is probably not possible, since (also according to the article) the application has to wait for all of the queued commands in a frame to be executed before it knows which resources are no longer needed.

Also, "the game already knows that a sequence of rendering commands refers to a set of resources" - this implies that the application (let alone the OS) can only change resource residency in between frames (sequences of rendering commands), not during a single frame.

Finally, DX12 is only a driver/application-side improvement over DX11. Adding memory management capabilities to the GPU itself would also require a hardware-side redesign.
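For reference, explicit application-driven residency control along these lines is exactly what the shipped D3D12 API exposes through ID3D12Device::MakeResident and Evict. A rough sketch, assuming an existing device and heap, with return values unchecked for brevity:

```c
/* Explicit residency control in D3D12; `device` and `heap` are assumed
   to already exist, and error handling is omitted. */
#define COBJMACROS
#include <d3d12.h>

void evict_then_restore(ID3D12Device *device, ID3D12Heap *heap)
{
    ID3D12Pageable *pageable[] = { (ID3D12Pageable *)heap };

    /* Allow the OS to page this memory out of video memory... */
    ID3D12Device_Evict(device, 1, pageable);

    /* ...and bring it back before the GPU touches the heap again. */
    ID3D12Device_MakeResident(device, 1, pageable);
}
```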

 

 


Do I need to start batching based on buffer sizes?

If you think that you'll need to use 2GB (or more than the recommended/available resource limits) of data per frame, then yes. Otherwise, no.

Edited by tonemgub


Thanks, Hodgman! Really good explanation!

 

 

 


Quote

Also, DX12 is only a driver/application-side improvement over DX11. Adding memory management capabilities to the GPU itself would also require a hardware-side redesign.

This kind of memory management is already required in order to implement the existing D3D runtime - pretending that the managed pool can be of unlimited size requires that the runtime can submit partial command buffers and page resources in and out of GPU-RAM during a frame.

What I meant to point out by that (and this was the main conclusion I reached with my train of thought) was that the CPU is still the one doing the heavy lifting when it comes to memory management. But now that I think about it, I guess it makes no difference - the main bottleneck is having to do an extra "memcpy" when there's not enough video memory.
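That extra copy is the usual staging round trip. A hedged sketch (in C) of the typical D3D11 upload path, with all handles assumed to already exist - `staging` created with D3D11_USAGE_STAGING and CPU write access, `vram_res` with D3D11_USAGE_DEFAULT:

```c
/* The "extra memcpy": CPU data lands in a CPU-visible staging resource,
   then the driver DMAs it into the DEFAULT-usage resource in video memory. */
#define COBJMACROS
#include <d3d11.h>
#include <string.h>

void upload_via_staging(ID3D11DeviceContext *ctx,
                        ID3D11Resource *staging,   /* D3D11_USAGE_STAGING */
                        ID3D11Resource *vram_res,  /* D3D11_USAGE_DEFAULT */
                        const void *cpu_data, size_t size)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    if (SUCCEEDED(ID3D11DeviceContext_Map(ctx, staging, 0,
                                          D3D11_MAP_WRITE, 0, &mapped)))
    {
        memcpy(mapped.pData, cpu_data, size);   /* the extra CPU-side copy */
        ID3D11DeviceContext_Unmap(ctx, staging, 0);

        /* queued on the GPU: copy from staging into video memory */
        ID3D11DeviceContext_CopyResource(ctx, vram_res, staging);
    }
}
```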

 

Regarding your explanation of how DMA could be used to make this work: would that method also have to be used when, for example, all of a larger-than-video-memory resource is accessed by the GPU in the same, single shader invocation? Or would that shader invocation (somehow) have to be broken up across the subsequently generated command lists? Does that mean that the DirectX pipeline is also virtualized on the CPU?

 

Anyway, I think the main question that must be answered here is whether the resource limits imposed by DX11 will go away in DX12. Yes, theoretically (and perhaps even practically) the CPU and GPU could be programmed to work together to provide virtually unlimited memory, but will this really be the case with DX12? From what I can tell, the article implies that the "unlimited memory" has to be implemented as "custom memory management" done by the application - not the runtime, nor the driver or GPU. This probably also means that it will be the application's job to split the processing of large data/resources into multiple command lists, and I don't think the application will be allowed to use that DMA-based synchronisation method (or trick? :) ) that you explained.

Edit: Wait. That's how tiled resources already work. Never mind... :)
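Since tiled resources keep coming up (including the tier-2 remark earlier in the thread): support is tiered per GPU, and the tier can be queried through the D3D11.2 feature options. A sketch, assuming an already-created ID3D11Device:

```c
/* Query the tiled-resources tier exposed by the driver (D3D11.2+ SDK). */
#define COBJMACROS
#include <d3d11_2.h>

D3D11_TILED_RESOURCES_TIER query_tiled_tier(ID3D11Device *device)
{
    D3D11_FEATURE_DATA_D3D11_OPTIONS1 opts = {0};
    if (FAILED(ID3D11Device_CheckFeatureSupport(
            device, D3D11_FEATURE_D3D11_OPTIONS1, &opts, sizeof(opts))))
        return D3D11_TILED_RESOURCES_NOT_SUPPORTED;
    return opts.TiledResourcesTier;
}
```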

Edited by tonemgub

