What are your opinions on DX12/Vulkan/Mantle?


Many years ago, I briefly worked at NVIDIA on the DirectX driver team (internship). This was the Vista era, when a lot of people were busy with the DX10 transition, the hardware transition, and the OS/driver model transition. My job was to take games that were broken on Vista, dismantle them from the driver level, and figure out why they were broken.

...


That was very interesting, thanks for that!

A descriptor is a texture-view, buffer-view, sampler, or a pointer.
A descriptor set is an array/table/struct of descriptors.
A descriptor pool is basically a large block of memory that acts as a memory allocator for descriptor sets.

So yes, it's very much like bindless handles, but instead of them being handles, they're the actual guts of a texture-view, or an actual sampler structure, etc...

Say you've got an HLSL shader with:

...


Also very informative, I'm starting to understand how to think in the "new way".
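If I'm understanding it right, in Vulkan terms that would presumably look something along these lines. This is just a sketch to check my understanding, using the descriptor-related names from the Vulkan C API; it assumes an already-created VkDevice called device, and error handling is skipped:

// Sketch only: one sampled texture at binding 0 and one uniform buffer at binding 1.
VkDescriptorSetLayoutBinding bindings[2] = {};
bindings[0].binding         = 0;
bindings[0].descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
bindings[0].descriptorCount = 1;
bindings[0].stageFlags      = VK_SHADER_STAGE_FRAGMENT_BIT;
bindings[1].binding         = 1;
bindings[1].descriptorType  = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
bindings[1].descriptorCount = 1;
bindings[1].stageFlags      = VK_SHADER_STAGE_VERTEX_BIT;

// The descriptor set layout describes the table/struct of descriptors.
VkDescriptorSetLayoutCreateInfo layoutInfo = { VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO };
layoutInfo.bindingCount = 2;
layoutInfo.pBindings    = bindings;
VkDescriptorSetLayout layout;
vkCreateDescriptorSetLayout(device, &layoutInfo, nullptr, &layout);

// The pool is the block of memory that actual sets get carved out of.
VkDescriptorPoolSize poolSizes[2] = {
    { VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 64 },
    { VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,         64 },
};
VkDescriptorPoolCreateInfo poolInfo = { VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO };
poolInfo.maxSets       = 64;
poolInfo.poolSizeCount = 2;
poolInfo.pPoolSizes    = poolSizes;
VkDescriptorPool pool;
vkCreateDescriptorPool(device, &poolInfo, nullptr, &pool);

// Allocating a set just takes a slice of the pool; filling it in happens via vkUpdateDescriptorSets.
VkDescriptorSetAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO };
allocInfo.descriptorPool     = pool;
allocInfo.descriptorSetCount = 1;
allocInfo.pSetLayouts        = &layout;
VkDescriptorSet set;
vkAllocateDescriptorSets(device, &allocInfo, &set);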


I'm looking forward to the new APIs (specifically Vulkan). Not only will we get better game performance, but given what Promit said, it seems like they will be less of a headache: the less black-box, under-the-hood state management there is, the easier it will be to write and debug.

I feel like I need to defend myself a little bit. I am a professional graphics programmer, and I've written renderers on Xbox 360 and Wii-U, along with my own side project that can render in DirectX 9/11 and OpenGL 3.x. I've written a multi-threaded renderer before and already have an idea of how I plan to tackle the new APIs. My initial worry was that getting even a minimally viable renderer working might be extremely painful, since my multi-threaded renderer will need to be restructured to create its own threads. Luckily it already does something along the lines of PSOs, since the game logic sends its own render commands, so I should be able to encapsulate them well enough.

Big concerns for me (and these are initial thoughts from a GDC presentation on DX12) are:

- Memory residency management. The presenters talked about developers being responsible for moving graphics resources between VRAM and system memory whenever memory pressure gets too high. This should be an edge case, but it's still an entirely new engine feature.

- Secondary threads for resource loading/shader compilation. This is actually a really good thing that I'm excited for, but it does mean I need to change my render thread to start issuing and managing new jobs. It's necessary, and for the greater good, but another task nonetheless.

- Root Signatures/Shader Constant management

Again, really exciting stuff, but there seems to be huge potential for issues, not to mention the engine now has to be acutely aware of how frequently the constants change and map them appropriately.
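To make that last concern concrete, here's a rough sketch of what a small root signature might look like, based on my reading of the public D3D12 material so far. It uses the d3dx12.h helper types, device is assumed to already exist, and the actual parameter layout (one root CBV for per-draw constants plus a descriptor table for per-material textures) is just made up for the example:

// Sketch only: assumes d3dx12.h, wrl.h, and a created ID3D12Device* device.
CD3DX12_DESCRIPTOR_RANGE srvRange;
srvRange.Init(D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 4, 0);   // t0-t3: per-material textures

CD3DX12_ROOT_PARAMETER params[2];
params[0].InitAsConstantBufferView(0);                   // b0: per-draw constants, changes every draw
params[1].InitAsDescriptorTable(1, &srvRange, D3D12_SHADER_VISIBILITY_PIXEL);

CD3DX12_ROOT_SIGNATURE_DESC desc(2, params, 0, nullptr,
    D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);

// Serialize the description and create the root signature object from it.
Microsoft::WRL::ComPtr<ID3DBlob> blob, error;
D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1, &blob, &error);

Microsoft::WRL::ComPtr<ID3D12RootSignature> rootSignature;
device->CreateRootSignature(0, blob->GetBufferPointer(), blob->GetBufferSize(),
                            IID_PPV_ARGS(&rootSignature));

The part that worries me is deciding which constants deserve root parameters versus descriptor tables, since that depends entirely on how often each one changes.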

@Promit: Thanks for the insight. That makes me feel better about the potential gains to be made and helps to assuage my fear that adding my own abstractions between the game thread and the render thread will defeat the purpose of the API.

@Hodgman: I'm actually writing a new engine for the express purpose of supporting the new APIs, and my previous engine also had stateless rendering. (Kinda: the game thread would just append unique commands for whatever states it wanted, and those commands would be filtered by a cache before being dispatched to the rendering thread. With the new threading changes I'll likely abandon this approach so that I can add rendering commands from any thread.) I do like your idea of having specific render passes; that would allow me to reuse render commands for shadowing vs. shading passes, and I'll be able to better generate my command lists/bundles.
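Just to illustrate what I mean by stateless commands, a stripped-down draw item looks something along these lines (the handle types and field names are simplified/made up for the example, not my actual engine code):

#include <cstdint>

// Made-up handle types just for the example; a real engine would use its own resource IDs.
using PipelineId = uint32_t;
using BufferId   = uint32_t;

// Every draw carries all the state it needs, so commands can be appended from any
// thread and sorted/deduplicated later instead of depending on previously bound state.
struct DrawItem
{
    uint64_t   sortKey;          // pass, pipeline/material, depth, etc. packed into one key
    PipelineId pipeline;         // maps nicely onto a PSO under D3D12/Vulkan
    BufferId   vertexBuffer;
    BufferId   indexBuffer;
    uint32_t   indexCount;
    uint32_t   constantsOffset;  // offset into a per-frame constant ring buffer
};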

I'll also be adding in architecture for Compute Shaders for the first time, so I'm worried that I might be biting off too much at once.

Perception is when one imagination clashes with another

Many years ago, I briefly worked at NVIDIA on the DirectX driver team ...


Congrats for ending up on John Carmack's twitter feed!


Congrats for ending up on John Carmack's twitter feed!
LMAO, the clickbait title the guy gave it hahaha

He could have gone all the way:

"Nvidia's ex employee tells you 10 things you wouldn't believe about the company!"

"5 things nvidia isn't telling you about their drivers!"

"Weird facts from nvidia employee, shocking!"

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

Overall, I think it's great to see renewed focus on lower CPU overhead. It's sometimes ridiculous just how much more efficient it can be to build command buffers on console vs. PC, and the new APIs look poised to close that gap a bit. Mobile in particular is the real winner here: they desperately need an API that is more efficient so that they can save more battery power and/or cram in more draw calls (from what I've heard, draw call counts in the hundreds are pretty common on mobile). So far I haven't seen any major screw-ups from the GL camp, so if they keep going this way they have a real shot at dislodging D3D as the de facto API for Windows development. However, I think I still have more trust in MS to actually deliver exactly what they've promised, so I will reserve full judgement until Vulkan is actually released.

Personally, I'm mostly just looking forward to having a PC version of our engine that's more in line with our console version. Bindless is just fantastic in every way, and it's really painful having to go back to the old "bind a texture at this slot" stuff when working on Windows (not to mention it makes our code messy having to support both). Manual memory management and synchronization can also be really powerful, not to mention more efficient. Async compute is also huge if you use it correctly, and hopefully there will be much more discussion about it now that it will be exposed in public APIs.

On the flip side, I am a bit concerned about sync issues. Sync between CPU and GPU (or even the GPU with itself) can lead to some really awful, hard-to-track-down bugs. It's bad because you might think that you're doing it right, but then you make a small tweak to a shader and suddenly you have artifacts. It's hard enough dealing with that for one hardware configuration, so it's a little scary to imagine what could happen for PC games that have to run on everything. Hopefully there will be some good debugging/validation functionality available for tracking this down, otherwise we will probably end up with drivers automatically inserting sync points to prevent corruption (and/or removing unnecessary syncs for better performance). Either way, beginners are probably in for a rough time.
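To be clear about what I mean by the CPU/GPU half of that: in the explicit model it basically boils down to fences you manage yourself. A bare-bones D3D12-style sketch (commandQueue, fence, fenceValue, and fenceEvent are assumed to already exist, and a real engine would not stall like this every frame) would be something like:

// Sketch only: block the CPU until the GPU has finished the work submitted so far.
commandQueue->Signal(fence.Get(), ++fenceValue);         // GPU sets the fence to fenceValue when it reaches this point
if (fence->GetCompletedValue() < fenceValue)              // GPU hasn't gotten there yet?
{
    fence->SetEventOnCompletion(fenceValue, fenceEvent);  // fire an event once it does
    WaitForSingleObject(fenceEvent, INFINITE);             // and wait for it on the CPU
}
// Get any of this (or the GPU-side barriers) wrong and you end up reading or
// overwriting resources the GPU is still using, hence the hard-to-track-down bugs.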


On the flip side, I am a bit concerned about sync issues. Sync between CPU and GPU (or even the GPU with itself) can lead to some really awful, hard-to-track-down bugs. It's bad because you might think that you're doing it right, but then you make a small tweak to a shader and suddenly you have artifacts. It's hard enough dealing with that for one hardware configuration, so it's a little scary to imagine what could happen for PC games that have to run on everything. Hopefully there will be some good debugging/validation functionality available for tracking this down, otherwise we will probably end up with drivers automatically inserting sync points to prevent corruption (and/or removing unnecessary syncs for better performance). Either way, beginners are probably in for a rough time.

Don't worry, a variety of shipping professional games will somehow make a complete mess of it in the final build too.

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

On the flip side, I am a bit concerned about sync issues. Sync between CPU and GPU (or even the GPU with itself) can lead to some really awful, hard-to-track-down bugs. It's bad because you might think that you're doing it right, but then you make a small tweak to a shader and suddenly you have artifacts. It's hard enough dealing with that for one hardware configuration, so it's a little scary to imagine what could happen for PC games that have to run on everything. Hopefully there will be some good debugging/validation functionality available for tracking this down, otherwise we will probably end up with drivers automatically inserting sync points to prevent corruption (and/or removing unnecessary syncs for better performance). Either way, beginners are probably in for a rough time.

New debugging tools are coming: https://channel9.msdn.com/Events/GDC/GDC-2015/Solve-the-Tough-Graphics-Problems-with-your-Game-Using-DirectX-Tools

"Recursion is the first step towards madness." - "Skegg?ld, Skálm?ld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/

Slightly off-topic, but I'm starting a new engine before Vulkan or D3D12 are released. Any pointers on how I can prepare my rendering pipeline architecture so that when they are released, I can use them efficiently? I'm planning to start with D3D11 and OpenGL 4.5.

Aether3D Game Engine: https://github.com/bioglaze/aether3d

Blog: http://twiren.kapsi.fi/blog.html


If you sign up for the DX12 EAP, you can access the source code of the UE4 DX12 implementation.

"Recursion is the first step towards madness." - "Skegg?ld, Skálm?ld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/

This topic is closed to new replies.
