Vulkan: Is Dynamic Draw Call Batching Still Relevant?


Hey, so I wanted to ask a simple question about dynamic batching, as in dynamically building vertex and index buffers to reduce draw calls. In the age of Vulkan/DirectX 12, dynamic instancing, and compute-shader-based culling, is dynamic batching still relevant? I know that engines such as Unreal Engine do not support dynamic instancing/dynamic batching, while Unity supports both. I'm currently building a somewhat open-world game, so efficiency is of the greatest importance. Thanks.


Dynamic batching is still very relevant; it's just that it conflicts with instancing, so using both together is difficult.

When working with a lot of unique objects, batching holds the advantage, which is why it was popular in linear games. The one hard rule for batching is that the batched meshes must share the same material/shader.
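To make that concrete, here is a minimal sketch of a CPU-side dynamic batcher. The Mesh/Batch types and the 65,536-vertex cutoff for 16-bit indices are illustrative assumptions, not any particular engine's API:

#include <cstdint>
#include <vector>

// Illustrative types, not from any particular engine.
struct Vertex { float px, py, pz, nx, ny, nz, u, v; };

struct Mesh {
    const void* material;            // meshes can only batch if this matches
    std::vector<Vertex>   vertices;
    std::vector<uint16_t> indices;
};

struct Batch {
    const void* material;
    std::vector<Vertex>   vertices;  // re-uploaded to a dynamic VB each frame
    std::vector<uint16_t> indices;   // 16-bit indices cap a batch at 65,536 vertices
};

// Merge visible meshes that share a material into as few batches as possible.
std::vector<Batch> BuildBatches(const std::vector<Mesh*>& visible)
{
    std::vector<Batch> batches;
    for (const Mesh* mesh : visible) {
        Batch* batch = nullptr;
        // Reuse an open batch with the same material if the mesh still fits.
        for (Batch& b : batches) {
            if (b.material == mesh->material &&
                b.vertices.size() + mesh->vertices.size() <= 65536) {
                batch = &b;
                break;
            }
        }
        if (!batch) {
            batches.push_back({mesh->material, {}, {}});
            batch = &batches.back();
        }
        // Append vertices, rebasing indices onto the batch's vertex range.
        // (A real batcher would also pre-transform vertices into world space here.)
        const uint16_t base = static_cast<uint16_t>(batch->vertices.size());
        batch->vertices.insert(batch->vertices.end(),
                               mesh->vertices.begin(), mesh->vertices.end());
        for (uint16_t i : mesh->indices)
            batch->indices.push_back(static_cast<uint16_t>(base + i));
    }
    return batches;   // one draw call per batch instead of one per mesh
}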

Instancing has the advantage when you need many copies of one object that repeats, like grass. So it's the technique you want to focus on when making open-world games.
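For comparison, instancing on the Vulkan side boils down to a single indexed draw with instanceCount > 1, with per-instance transforms typically coming from a second vertex buffer bound at VK_VERTEX_INPUT_RATE_INSTANCE. A sketch, assuming the pipeline and buffers are created elsewhere:

#include <vulkan/vulkan.h>

// Hypothetical helper: the pipeline and buffer handles are created elsewhere.
void DrawGrassInstanced(VkCommandBuffer cmd, VkPipeline grassPipeline,
                        VkBuffer grassVertexBuffer, VkBuffer instanceDataBuffer,
                        VkBuffer grassIndexBuffer,
                        uint32_t grassIndexCount, uint32_t visibleGrassCount)
{
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, grassPipeline);

    // Binding 0: the shared grass mesh. Binding 1: per-instance data,
    // declared with VK_VERTEX_INPUT_RATE_INSTANCE in the pipeline's vertex input.
    VkBuffer     buffers[] = { grassVertexBuffer, instanceDataBuffer };
    VkDeviceSize offsets[] = { 0, 0 };
    vkCmdBindVertexBuffers(cmd, 0, 2, buffers, offsets);
    vkCmdBindIndexBuffer(cmd, grassIndexBuffer, 0, VK_INDEX_TYPE_UINT16);

    // One draw call renders every visible instance of the mesh.
    vkCmdDrawIndexed(cmd, grassIndexCount, visibleGrassCount, 0, 0, 0);
}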

 

The best approach would be to use both: batching for unique locations and instancing for large open-world zones. However, making a dynamic batch manager that doesn't conflict with instancing is very difficult and tends to make an engine difficult to use; Lumberyard is a good example.

Most engines, like Unreal 4, instead opt for selective batching, allowing developers to merge objects by hand.


Thank you both for your responses. What I tend to dislike about Unity's approach is that, in my opinion, it can only dynamically batch really small meshes. It also seems that now that Unity supports dynamic instancing, a mesh will use instancing before dynamic batching. Do you have any opinion on mesh merging/vertex pulling in comparison to the older batching techniques?

I think it's interesting if your use cases have a bottleneck or performance issue that could be solved using this technique.


Thank you both for your responses. What I tend to dislike about Unity's approach is that, in my opinion, it can only dynamically batch really small meshes,

That is how it works: a batch holds about 65,000 vertices, so you can either have 65,000 single-vertex meshes or one 65,000-vertex mesh. Unreal has the same limit, except there it's called the 64k polygon limit.

What it means is that when you load in a mesh with more than 64k-65k vertices, it will create a new batch. How many batches a game can use depends on the graphics card, not the engine.
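For context, the 64k-65k figure matches the ceiling of a 16-bit index buffer, which is a quick thing to verify:

#include <cstdint>
#include <limits>

// A 16-bit index addresses at most 2^16 = 65,536 distinct vertices (0..65535),
// which is where the "64k/65k per batch" figure comes from.
static_assert(std::numeric_limits<std::uint16_t>::max() == 65535,
              "16-bit indices run out at vertex 65,535");

// Engines that need larger batches switch to 32-bit indices
// (VK_INDEX_TYPE_UINT32), trading index memory and bandwidth for batch size.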

 

The reason Unity can only batch small meshes is UV maps and smoothing groups. A UV seam splits the vertices wherever it runs; in games this matters a lot, because a single UV map can add as much as 30%-50% to the vertex count. Smoothing groups often add another 10%-20%.

So a simple 1,000-vertex mesh with a UV map and only one smoothing group is more like 1,500-1,800 vertices, and you end up fitting a lot fewer meshes per batch than expected.
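Plugging the numbers in, a back-of-the-envelope sketch using the percentages above:

#include <cstdio>

int main() {
    // How many "1,000-vertex" meshes really fit in a 65,536-vertex batch?
    const int   baseVertices = 1000;
    const float uvSeamFactor = 1.5f;  // UV seams can add 30%-50%; take the high end
    const float smoothFactor = 1.2f;  // smoothing splits add another 10%-20%
    const int   effective    = int(baseVertices * uvSeamFactor * smoothFactor); // ~1,800
    std::printf("naive meshes/batch: %d, actual: %d\n",
                65536 / baseVertices, 65536 / effective);   // 65 vs 36
    return 0;
}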

 

Do you have any opinion on mesh merging / vertex pulling in comparison to the older batching techniques?

My opinion on vertex pulling and mesh merging is that both are just different ways of batching. The performance gains and downsides are the same as with batching.
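For a concrete picture of vertex pulling: instead of fixed-function vertex attributes, the vertex shader fetches its own data from a storage buffer using gl_VertexIndex, so many merged meshes can live in one big buffer and be drawn with a single non-indexed draw. A sketch, with the GLSL embedded as a C++ string; the vertex layout and binding numbers are assumptions:

// Vertex pulling: no vertex input bindings at all; the shader reads a big
// storage buffer holding every merged mesh's vertices. Layout is illustrative.
static const char* kVertexPullingShader = R"(
#version 450

struct PulledVertex {
    vec4 positionAndU;   // xyz = position, w = texcoord.u
    vec4 normalAndV;     // xyz = normal,   w = texcoord.v
};

layout(std430, set = 0, binding = 0) readonly buffer Vertices {
    PulledVertex vertices[];
};

layout(push_constant) uniform Push { mat4 viewProj; };

void main() {
    // gl_VertexIndex addresses straight into the merged buffer, so a
    // non-indexed vkCmdDraw over the whole range renders every mesh.
    PulledVertex v = vertices[gl_VertexIndex];
    gl_Position = viewProj * vec4(v.positionAndU.xyz, 1.0);
}
)";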

