What are your opinions on DX12/Vulkan/Mantle?


https://github.com/boreal-games/magma

 

Decided to write my own comprehensive Mantle headers and loading library so I can use Mantle while waiting for Vulkan.  So far I've filled out [tt]mantle.h[/tt], [tt]mantleDbg.h[/tt], and [tt]mantleWsiWinExt.h[/tt].

 

There are a couple minor issues that I've discovered so far:

  • [tt]grWsiWinGetDisplays[/tt] thinks the output array you pass into it is zero-length
  • The Windows WSI error code values haven't been determined for the aforementioned reason
  • I could only get the functions working with the 64-bit DLLs so far; it seems to be a calling convention issue

Over the next few days I'll be writing the other extensions, like the DMA queue extension.


otherwise I'll just end up adding the same amount of abstraction that DX11 does already, kind of defeating the point.

Porting such that your Direct3D 12 implementation mimics that of Direct3D 11 will leave you with 50% of Direct3D 11's performance on Direct3D 12. In other words, definitely do not model your graphics pipeline inside Direct3D 12 around that of Direct3D 11.
 
As for the porting itself, I rather enjoy it.  I am working on Metal right now, and you get a sense of pride each time you re-implement a part of the system in a stable and reliable manner while getting the same results as on every other platform.  I like having a single interface that produces the same results reliably across many APIs.
Naturally, I am an engine programmer, so your mental tolerance for this kind of low-level handling may vary.
 
 
L. Spiro


Porting such that your Direct3D 12 implementation mimics that of Direct3D 11 will leave you with 50% of Direct3D 11's performance on Direct3D 12. In other words, definitely do not model your graphics pipeline inside Direct3D 12 around that of Direct3D 11.

It's obviously not ideal, but probably still better performance than just sticking with D3D11. MS showed off a naive port of Futuremark's engine, where they'd just shoehorned D3D12 into their D3D11-oriented engine (by replacing their D3D11 redundant-state-removal/caching code with a D3D12 PSO hashmap) and still got ~2x the performance of the original D3D11 version.
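For the curious, a "D3D12 PSO hashmap" of the kind described above can be sketched roughly like this. Everything here is hypothetical illustration, not Futuremark's actual code: the D3D11-style state fields are collapsed into a key, and the compiled pipeline state is cached so each state combination only pays the creation cost once. In a real port, `PipelineState` would wrap the result of `ID3D12Device::CreateGraphicsPipelineState`.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <memory>
#include <unordered_map>

// Hypothetical stand-ins for the D3D11-style state an engine tracks.
struct PipelineKey {
    uint32_t vertexShaderId;
    uint32_t pixelShaderId;
    uint32_t blendStateId;
    uint32_t rasterStateId;
    bool operator==(const PipelineKey& o) const {
        return vertexShaderId == o.vertexShaderId && pixelShaderId == o.pixelShaderId
            && blendStateId == o.blendStateId && rasterStateId == o.rasterStateId;
    }
};

struct PipelineKeyHash {
    size_t operator()(const PipelineKey& k) const {
        size_t h = std::hash<uint32_t>{}(k.vertexShaderId);
        h = h * 31 + std::hash<uint32_t>{}(k.pixelShaderId);
        h = h * 31 + std::hash<uint32_t>{}(k.blendStateId);
        h = h * 31 + std::hash<uint32_t>{}(k.rasterStateId);
        return h;
    }
};

// Stand-in for ID3D12PipelineState.
struct PipelineState { PipelineKey key; };

class PsoCache {
public:
    // Returns the cached PSO for this state combination, creating it
    // only on first use (the expensive step in a real engine).
    PipelineState* Get(const PipelineKey& key) {
        auto it = cache_.find(key);
        if (it == cache_.end()) {
            ++createCount_;
            it = cache_.emplace(key, std::make_unique<PipelineState>(PipelineState{key})).first;
        }
        return it->second.get();
    }
    int createCount() const { return createCount_; }
private:
    std::unordered_map<PipelineKey, std::unique_ptr<PipelineState>, PipelineKeyHash> cache_;
    int createCount_ = 0;
};
```

The point of the trick is that the per-draw lookup is cheap, while the expensive PSO creation happens only the first time each state combination is seen (which is why a naive port still stutters on the first frame that uses a new combination).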

My numbers come from our naïve initial port of our engine for this demo (which did largely the same thing Futuremark claims to have done):
http://www.rockpapershotgun.com/2015/05/05/square-enix-directx-12-tech-demo-witch-chapter-0-cry/

I don’t trust any company that says they got great performance just by shoehorning.


L. Spiro


Is it just me, or does writing code and implementing things with these new APIs (at least with D3D12, once you've learned the new basics) feel more natural and less artificial than with the older, more abstract APIs? Sure, it's still not like writing general code that runs only on the CPU, but it feels somewhat closer.. Or maybe it's just because these APIs are smaller and you have fewer calls and structures to remember XD



Is it just me, or does writing code and implementing things with these new APIs (at least with D3D12, once you've learned the new basics) feel more natural and less artificial than with the older, more abstract APIs?

I agree. I think that's because there are fewer functions and structures; at least with OpenGL there are often several ways to get the same result, with very subtle differences.

For instance, to create a buffer there are two functions, glBufferData and glBufferStorage, both of which can upload data; there is one function to upload data to a specific range (glBufferSubData); there are two functions to map the data (glMapBuffer and glMapBufferRange); there are two ways to define a VAO (one that binds the underlying storage, and another that splits the vertex description from the buffer binding); and so on...

 

With DX12, creation and upload are completely decoupled, and there is a single mapping function. There is also an upload function, but I haven't used it so far. It's much nicer.

 

The only "not so natural" thing that comes with DX12 is that you need to avoid modifying resources while they are in use by a command list. I do this by having dual command allocators and constant buffers that are swapped when a frame is finished.
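A minimal sketch of that dual-allocator scheme might look like the following (all names are hypothetical; `MockFence` stands in for `ID3D12Fence`): keep one set of per-frame resources for each frame in flight, and before reusing a set, check the fence to confirm the GPU has finished the frame that last used it.

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for a GPU fence; completed would map to
// ID3D12Fence::GetCompletedValue() in a real implementation.
struct MockFence { uint64_t completed = 0; };

constexpr int kFramesInFlight = 2;

struct FrameResources {
    uint64_t fenceValue = 0;  // value signalled when this frame's GPU work completes
    // ...command allocator, constant buffers, etc. would live here
};

class FrameRing {
public:
    // Called at the start of a frame: returns the resource set to reuse,
    // reporting whether the CPU would have had to block on the fence first.
    FrameResources& Begin(const MockFence& fence, bool* hadToWait) {
        FrameResources& fr = frames_[frameIndex_];
        *hadToWait = fence.completed < fr.fenceValue;
        // A real implementation would SetEventOnCompletion + wait here.
        return fr;
    }
    // Called after submitting the frame's command lists, with the value
    // the queue was told to signal.
    void End(uint64_t signalledValue) {
        frames_[frameIndex_].fenceValue = signalledValue;
        frameIndex_ = (frameIndex_ + 1) % kFramesInFlight;
    }
private:
    FrameResources frames_[kFramesInFlight];
    int frameIndex_ = 0;
};
```

With two sets, the CPU only blocks if it gets more than one full frame ahead of the GPU, which is exactly the "signal" requirement discussed below.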


I do this by having dual command allocators and constant buffers that are swapped when a frame is finished.

You are supposed to use signals.


L. Spiro


The only "not so natural" thing that comes with DX12 is that you need to avoid modifying resources while they are in use by a command list. I do this by having dual command allocators and constant buffers that are swapped when a frame is finished.

I find this to be a really natural change too, as it's just the result of being honest about parallel programming :)
All the old APIs have tried really hard to pretend that your graphics code is single-threaded - that your function calls have an immediate result.
In truth, all CPU+GPU programming is "multithreaded", so an honest API should reflect that.
When using the old APIs, which hide these details, it's very easy to do horribly slow operations, like accidentally reading from write-combined memory regions or synchronizing the CPU/GPU to lock a resource. To use these old APIs effectively, you really had to know what was happening behind their lies and work with the reality -- e.g. this means that in D3D11 you should already be taking care to avoid modifying a resource that is in use! Lots of engines already use double buffering or ring buffering on D3D11, which is much more natural now on D3D12.

I do this by having dual command allocators and constant buffers that are swapped when a frame is finished.

You are supposed to use signals.
Double buffering works fine, as long as you can guarantee that the GPU has finished the previous frame before the CPU begins using that buffer... which requires the use of a signal, yeah :)

We use the same strategy on D3D9/D3D11 for transient vertex data - the CPU writes into unsynchronized buffers (NOOVERWRITE flag), swapping which buffer is used per frame (or which range of the buffer is used each frame). The CPU then just has to wait on an event signalling that the GPU has finished the previous frame.
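That NOOVERWRITE-style scheme boils down to a ring allocator over a persistently mapped buffer. A rough sketch, with all names and sizes made up for illustration: hand out ranges front-to-back, wrap at the end, and rely on the per-frame fence wait described above to guarantee the GPU is done with a range before it gets reused.

```cpp
#include <cassert>
#include <cstddef>

// Ring allocator over a fixed-size transient buffer. Alignment matters in
// practice (e.g. 256-byte alignment for D3D12 constant buffer views);
// it is omitted here for brevity.
class RingAllocator {
public:
    explicit RingAllocator(size_t capacity) : capacity_(capacity) {}

    // Writes the byte offset of an allocated range to *outOffset and
    // returns true, or returns false if the request can never fit.
    bool Allocate(size_t size, size_t* outOffset) {
        if (size > capacity_) return false;
        if (head_ + size > capacity_) head_ = 0;  // wrap: discard the tail fragment
        *outOffset = head_;
        head_ += size;
        return true;
    }

private:
    size_t capacity_;
    size_t head_ = 0;
};
```

Note that this sketch deliberately leaves out the safety check: it assumes the frame-fence wait has already guaranteed that wrapped-over ranges are no longer in flight on the GPU.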

You are supposed to use signals.


L. Spiro

 

Sure, I use a signal to wait for frame N to complete once frame N+1's last command list has been submitted.

I assume that it's still necessary to keep the command queue filled so that the GPU is always busy.


I was just wondering: since game engines like DICE's already support Mantle, wouldn't those engines be more likely to switch to Vulkan rather than DirectX 12, since Vulkan is based on Mantle?

Then they would have a big competitive advantage over other games, because they could run on Windows 7 and 8, and not just Windows 10. From the looks of it, Vulkan seems set to be released soon. Imagination Tech already has their GPUs working with Vulkan.



That's going to depend on the on-the-ground realities of driver and operating-system support once the dust finally settles. Remember, GL sounds like a great cross-platform Direct3D killer on paper but doesn't live up to that in reality. We still don't know how Vulkan will be in real life. I expect that major engines, including DICE/Frostbite, will simply support both.



Direct3D 12 is based on Mantle, and some reports say it will replace it.
Vulkan may or may not be based on (or rather inspired by) Mantle (couldn't double-check while on my phone).


L. Spiro



Direct3D 12 is based on Mantle, and some reports say it will replace it.
Vulkan may or may not be based on Mantle.


L. Spiro

 

There has never been any mention of D3D12 being based off of Mantle, nor did Microsoft ever make any statement about that. 

AMD, however, has stated that they provided Khronos with Mantle to use as a base, and from what I can see they pretty much did draw inspiration from it.


There has never been any mention of D3D12 being based off of Mantle

“Based off” might be a slightly inaccurate choice of words (since I was just using the words he used), but no need to get pedantic.
http://www.extremetech.com/gaming/177407-microsoft-hints-that-directx-12-will-imitate-and-destroy-amds-mantle

“Imitate” and “inspired by” are words often used (same words used to describe Vulkan’s relationship with Mantle).


L. Spiro



That is indeed a much more reasonable way to look at it. 

 

I like to see it as AMD lighting a fire under both Microsoft's and Khronos's asses by showing them that developers truly want and need new and better ways to interact with graphics hardware on PC. If you want to call that Microsoft being inspired by AMD, that's totally valid, but the APIs themselves, while sharing certain concepts (just like previous versions of DirectX and OpenGL shared concepts), are very, very different.


I like to see it as AMD lighting a fire under both Microsoft's and Khronos's asses by showing them that developers truly want and need new and better ways to interact with graphics hardware on PC.

That's how I see it.


L. Spiro


I was just wondering: since game engines like DICE's already support Mantle, wouldn't those engines be more likely to switch to Vulkan rather than DirectX 12, since Vulkan is based on Mantle?
Then they would have a big competitive advantage over other games, because they could run on Windows 7 and 8, and not just Windows 10. From the looks of it, Vulkan seems set to be released soon. Imagination Tech already has their GPUs working with Vulkan.

These guys already support half a dozen different APIs. It's cheap for them to internally support both (and then game teams can choose whether to ship both/either/neither based on practical realities at the time).

Also, Dx12 isn't just Windows 10; it's also Xbone. Console games likely make more money for EA than PC does, so optimizing for Xbone is probably important to them.

Off the top of my head, the full list of current APIs (as in, there's a justification for using them for a product right now) is:
Dx9(PC), Dx9.x(360), GCM(Ps3), GNM(Ps4), GXM(PsVita), Dx11(PC), Dx11.x(Xbone), Dx12(PC), Dx12.x(Xbone), GL3(PC), GL4(PC), Mantle(PC), Vulkan(PC), GL|ES2(Mobile), GL|ES3(Mobile), Metal(iOS).
At this point, adding one item to that list seems like a small task! :lol:
[edit]..aaand I forgot Nintendo, add two more![/edit]

Direct3D 12 is based on Mantle, and some reports say it will replace it.
Vulkan may or may not be based on (or rather inspired by) Mantle (couldn't double-check while on my phone).

Vulkan is very much Mantle-derived. Lots of the example Vulkan code that's been shown off so far would compile perfectly fine under the Mantle SDK if you just replaced the "vk" prefixes with "gr" :D
I'd go as far as to say that Vulkan 1.0 will be the first public release of Mantle! :lol:


Just for laughs...

 

[image: uGgcRYG.jpg]

 

I don't know how old this page of the Mantle manual is, but the DX SDK page was there when I joined the DX12 EAP in October 2014..

 

Anyway, looking at the early private DX12 EAP docs, the D3D12 API has changed a lot from its announcement to the current public version..




I don't know how old this page of the Mantle manual is, but the DX SDK page was there when I joined the DX12 EAP in October 2014..

 

I'm in both the Mantle beta program and the DX12 EAP; the Mantle documentation has been around for quite a bit longer, as far as I'm aware.


Apple has announced El Capitan, their upcoming OS X version. I thought it would be relevant to this thread, as it will bring support for their Metal API to OS X.

 

I have no details, but apparently there have been claims of a 50% improvement in performance and a 40% reduction in CPU usage.
