Renderer design - Bitsquid / Our Machinery


Hi,

Recently I have been looking into a few renderer designs that I could take inspiration from for my game engine. I stumbled upon the Bitsquid and Our Machinery blogs about how they architect their renderers to support multiple platforms (which is what I am looking to do!).

I have gotten so far, but I am unsure about a few things that they say in the blogs.

This is a simplified version of how I understand their system to be set up:

  • Render Backend - one per API, used to execute the commands from the RendererCommandBuffer and RendererResourceCommandBuffer
  • Renderer Command Buffer - platform-agnostic command buffer for creating Draw, Compute and Resource Update commands
  • Renderer Resource Command Buffer - platform-agnostic command buffer for creating and deleting GPU resources (textures, buffers, etc.)

The render backend has arrays of API-specific resources (e.g. VulkanTexture, D3D11Texture, ...), and each engine-side resource holds a uint32 handle to the render-side resource.
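Roughly, I picture the handle lookup working something like this (the type and function names are just my guess at the idea, not taken from either blog):

```cpp
#include <cstdint>
#include <vector>

using TextureHandle = uint32_t;

struct VulkanTexture { /* VkImage, VkImageView, allocation, ... */ };

class VulkanRenderBackend {
public:
    // Engine-side code only ever holds the uint32 handle; the backend
    // translates it into the API-specific object when executing commands.
    const VulkanTexture& resolve(TextureHandle handle) const {
        return textures_[handle];
    }

private:
    std::vector<VulkanTexture> textures_; // indexed by handle
};
```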

Their system is set up for multi-threaded usage (building command buffers in parallel, and executing RendererCommandBuffers, though not resource command buffers, in parallel).

 

One thing I would like clarification on:

  1. In one of the blog posts they say, "When the user calls a create-function we allocate a unique handle identifying the resource."

    1. Where are the handles allocated from? The RenderBackend?

    2. How do they do it in a thread-safe way that doesn't kill performance?

 

If anyone has any ideas or any additional resources on the subject, that would be great.

 

Thanks


I don't know either of the two, nor had I heard about them before, but did you try asking them first? It might help in case nobody else here has a clue about their design/intentions.

You might want to check out the bgfx source code. It follows a similar design. It's common in a lot of big AAA engines.

As to your question - the handles are typically just an integer index that is used to look up the pointer in the backend. So for example the client code sends a texture (as raw RGB data) to the backend and says "create this texture for me". The renderer allocates a command in the command buffer and a handle for the new resource and returns the handle to the client. The idea is that all code should go via the API using the handle and never poke into resources directly (because usually that is only allowed from the render thread). So performance isn't too much of a problem - you can lock the handle data structures or even use lock-free techniques.
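In rough pseudo-C++ the flow might look something like this (all names are made up for illustration, not actual bgfx or Bitsquid code):

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

using TextureHandle = uint32_t;

struct CreateTextureCmd {
    TextureHandle handle;
    uint32_t width, height;
    std::vector<uint8_t> pixels; // raw RGB data copied from the client
};

class RendererResourceCommandBuffer {
public:
    // Client-side create call: allocates a handle, records a command, and
    // returns immediately. The GPU resource is created later on the render
    // thread when the backend executes the recorded commands.
    TextureHandle create_texture(uint32_t w, uint32_t h, const uint8_t* rgb) {
        std::lock_guard<std::mutex> lock(mutex_);
        TextureHandle handle = next_handle_++;          // allocate the handle
        commands_.push_back({handle, w, h,
                             std::vector<uint8_t>(rgb, rgb + w * h * 3)});
        return handle;                                  // client only keeps this
    }

private:
    std::mutex mutex_;
    TextureHandle next_handle_ = 0;
    std::vector<CreateTextureCmd> commands_; // consumed by the render backend
};
```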

Cheers,

Sam

 

On 29/01/2018 at 3:22 AM, hyper3d said:
  • Where are the handles allocated from? The RenderBackend?

  • How do they do it in a thread-safe way that doesn't kill performance?

I get mine from the back end. I simply wrap a pool allocator in a mutex. If you've got lots of threads constantly trying to lock the same mutex, you'll kill performance, but if it's only occasionally locked, it will be rare for one thread to have to wait on another one and the mutex overhead will be quite low.
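Something along these lines (a simplified sketch of the idea, not my actual code):

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

// A free-list pool of handles behind a mutex.
class HandlePool {
public:
    uint32_t allocate() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!free_.empty()) {
            uint32_t h = free_.back(); // reuse a previously released handle
            free_.pop_back();
            return h;
        }
        return next_++;                // otherwise hand out a fresh one
    }

    void release(uint32_t handle) {
        std::lock_guard<std::mutex> lock(mutex_);
        free_.push_back(handle);       // can be reused by a later allocate()
    }

private:
    std::mutex mutex_;
    uint32_t next_ = 0;
    std::vector<uint32_t> free_;
};
```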

On 1/28/2018 at 8:22 AM, hyper3d said:
    1. How do they do it in a thread-safe way that doesn't kill performance?

Thread-safe allocation of handles is trivial. Divide your handle space up by the number of threads, and let each thread allocate handles within its own space.

After allocation those handles are going to be passed back and forth anyway, so there's never a need for two threads to try and allocate the same handle at the same time.
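For example (just a sketch of the idea, with arbitrary bit counts):

```cpp
#include <cstdint>

// Carve the 32-bit handle space into per-thread ranges: the top 8 bits
// identify the thread, the bottom 24 bits are a thread-local counter.
struct ThreadLocalHandleAllocator {
    explicit ThreadLocalHandleAllocator(uint32_t thread_index)
        : thread_bits_(thread_index << 24), next_(0) {}

    uint32_t allocate() {
        // No locking needed: no other thread ever allocates from this range.
        return thread_bits_ | next_++;
    }

    uint32_t thread_bits_;
    uint32_t next_; // supports up to 2^24 handles per thread
};
```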

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

This topic is closed to new replies.
