Renderer Architecture - Submission


Hi,

I posted on here a while back about rendering architecture and came away with some great information.

I am planning on implementing a render queue which collects the visible objects in the scene and sorts them based on different criteria to minimise state changes, etc.

The thing I am currently undecided about is: what is the best way to submit my draw calls?

(I want to support both OpenGL and Vulkan.)

At the moment I have two ideas for how I can handle it.

  1. The renderable handles the rendering itself (i.e. it calls renderContext->BindVertexBuffer(...) etc.) and sets up the renderer state
    1. Pro - Each renderable is fully in control of how it renders
    2. Con - Have to manually manage state
  2. The renderable pushes RenderCommands (DrawMesh, DrawMeshIndexed, etc.) into a CommandBuffer that gets executed by the RenderBackend at the end of the frame
    1. Pro - Stateless
    2. Con - Seems more difficult to extend with new features
    3. Pro/Con - The front end only has a subset of rendering capabilities

 

There are more pros / cons for each, but I have listed a couple to help show my thinking.
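
To make option 2 a bit more concrete, this is roughly the shape I have in mind. All of the names (DrawMeshCmd, CommandBuffer, RenderBackend) are placeholders just to illustrate the idea, not a real implementation:

#include <cstdint>
#include <vector>

// One self-contained command per draw: the backend needs nothing else to execute it.
struct DrawMeshCmd {
    uint32_t vertexBuffer;  // handles into the backend's resource tables,
    uint32_t indexBuffer;   // not raw GL/Vulkan objects
    uint32_t indexCount;
    uint32_t material;      // shader + textures + uniforms, resolved by the backend
};

class CommandBuffer {
public:
    void DrawMeshIndexed(const DrawMeshCmd& cmd) { m_commands.push_back(cmd); }
    const std::vector<DrawMeshCmd>& Commands() const { return m_commands; }
private:
    std::vector<DrawMeshCmd> m_commands;
};

class RenderBackend {
public:
    // Runs once at the end of the frame; this is the only code that touches
    // GL or Vulkan, so state management lives in a single place.
    void Execute(const CommandBuffer& buffer) {
        for (const DrawMeshCmd& cmd : buffer.Commands()) {
            BindMaterial(cmd.material);
            BindVertexBuffer(cmd.vertexBuffer);
            BindIndexBuffer(cmd.indexBuffer);
            DrawIndexed(cmd.indexCount);
        }
    }
private:
    void BindMaterial(uint32_t) { /* GL / Vulkan specific */ }
    void BindVertexBuffer(uint32_t) { /* ... */ }
    void BindIndexBuffer(uint32_t) { /* ... */ }
    void DrawIndexed(uint32_t) { /* ... */ }
};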

 

Anyone have any comments on either of these two approaches, or on any other approaches that are typically used?

 

Thanks


I actually work with a more agnostic rendering system for my renderables. Most objects require the same resources: vertex buffers, index buffers, a constant buffer for the transform, etc. So what I do is create a "render object" that the renderable fills out for each draw call and sends to the renderer to process. The renderer is what actually translates the render object into the command list.
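
A minimal sketch of that idea, with made-up names (RenderObject, Renderer::Submit) just to show the shape:

#include <cstdint>
#include <vector>

// The per-draw data a renderable fills out; the renderer owns the translation
// into actual API commands.
struct RenderObject {
    uint32_t vertexBuffer;   // backend handles, not raw GL/Vulkan objects
    uint32_t indexBuffer;
    uint32_t indexCount;
    uint32_t transformCB;    // constant buffer holding the object's transform
    uint32_t material;
};

class Renderer {
public:
    // Each renderable submits one RenderObject per draw call.
    void Submit(const RenderObject& obj) { m_objects.push_back(obj); }

    // Later, translate the whole batch into GL calls or a Vulkan command buffer.
    void Flush() {
        for (const RenderObject& obj : m_objects) {
            (void)obj; // API-specific translation goes here
        }
        m_objects.clear();
    }
private:
    std::vector<RenderObject> m_objects;
};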

2.1. Once you go stateless, you won't go back :D

2.2. New features get added very rarely to D3D/GL/Vulkan. Adding a new GL extension/etc to the stateless API is a bit more code, but I wouldn't really say it's any more difficult.

2.3. Why does this expose fewer capabilities than option #1? Anything that you can expose in option #1, you can also put into your stateless command architecture.

My thoughts on this are here: http://www.goatientertainment.com/downloads/Designing a Modern GPU Interface.pptx

1 hour ago, Hodgman said:

2.1. Once you go stateless, you won't go back :D

2.2. New features get added very rarely to D3D/GL/Vulkan. Adding a new GL extension/etc to the stateless API is a bit more code, but I wouldn't really say it's any more difficult.

2.3. Why does this expose fewer capabilities than option #1? Anything that you can expose in option #1, you can also put into your stateless command architecture.

My thoughts on this are here: http://www.goatientertainment.com/downloads/Designing a Modern GPU Interface.pptx

That's fair :)

I've read through your presentation previously and had a quick glance over it again recently.

From what I gather:

  • You gather all the visible objects 
  • You build a list of DrawItems, each of which basically contains all the data for a single draw call (state, resources, vertex data)
  • Sort the draw items
  • Renderer executes the DrawItems (all at once at the end of a frame?)
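
In my head a DrawItem ends up as a plain struct holding everything a single draw needs, and the per-frame work is just fill, sort, execute. Something like this (names are mine, only to check my understanding):

#include <algorithm>
#include <cstdint>
#include <vector>

// Everything one draw call needs: pipeline state, resource bindings and the
// draw parameters themselves.
struct DrawItem {
    uint64_t sortKey;        // used to order the item within its queue
    uint32_t pipelineState;  // shader + blend/depth/raster state
    uint32_t vertexBuffer;
    uint32_t indexBuffer;
    uint32_t constantBuffer; // per-object transform etc.
    uint32_t indexCount;
    uint32_t startIndex;
};

// Per frame: the systems fill 'items' with visible draws, then they get sorted
// and handed to the backend for execution.
void SortDrawItems(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.sortKey < b.sortKey; });
}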

 

Is that a good simplified example of what your engine does?

Also, does your engine use this for all types of drawing? i.e. models, UI, particles, etc.

Yeah pretty much, I:

  • Have a bunch of different rendering systems (models, ui, particles, etc), which know how to draw different kinds of things. These will ideally pre-create a bunch of draw-items.
  • Each frame, I query those systems to fill some containers up with draw-items.
  • There can be more than one container/queue -- e.g. one for opaque objects, one for transparent, one for each shadow-map, etc... The actual set of containers in use is defined by a "render pipeline" object.
  • Those queues/containers get sorted (each might be sorted differently).
  • Each individual container is then submitted alongside a "render pass" object, into a device context. 
  • They don't all have to be submitted at the same time, but generally all rendering submission does occur in the same big chunk of code.
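
Roughly, the shape of that frame looks like this (names are made up for the sketch, and the real structures carry a lot more data):

#include <algorithm>
#include <cstdint>
#include <vector>

struct DrawItem  { uint64_t sortKey; /* state + resource bindings ... */ };
struct RenderPass { uint32_t colorTarget; uint32_t depthTarget; bool clearColor; bool clearDepth; };

// One queue per logical bucket: each shadow map, opaque, transparent, etc.
struct DrawQueue {
    RenderPass            pass;   // where these items get rendered
    std::vector<DrawItem> items;  // filled each frame by the rendering systems
};

// The "render pipeline" object defines which queues exist this frame.
struct RenderPipeline {
    std::vector<DrawQueue> queues;
};

class DeviceContext {
public:
    void Submit(const RenderPass& pass, const std::vector<DrawItem>& items) {
        // bind/clear the pass's targets, then translate each item into API calls
        (void)pass; (void)items;
    }
};

void SubmitFrame(RenderPipeline& pipeline, DeviceContext& context) {
    for (DrawQueue& queue : pipeline.queues) {
        // each queue can be sorted differently (by state, by depth, ...)
        std::sort(queue.items.begin(), queue.items.end(),
                  [](const DrawItem& a, const DrawItem& b) { return a.sortKey < b.sortKey; });
        context.Submit(queue.pass, queue.items);
        queue.items.clear();
    }
}
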
On 8/15/2017 at 0:57 AM, Hodgman said:

Yeah pretty much, I:

  • Have a bunch of different rendering systems (models, ui, particles, etc), which know how to draw different kinds of things. These will ideally pre-create a bunch of draw-items.
  • Each frame, I query those systems to fill some containers up with draw-items.
  • There can be more than one container/queue -- e.g. one for opaque objects, one for transparent, one for each shadow-map, etc... The actual set of containers in use is defined by a "render pipeline" object.
  • Those queues/containers get sorted (each might be sorted differently).
  • Each individual container is then submitted alongside a "render pass" object, into a device context. 
  • They don't all have to be submitted at the same time, but generally all rendering submission does occur in the same big chunk of code.

OK cool, that's pretty much what I thought.

Having had some free time to look into this type of system, I have a few more questions: 

  1. How do you handle draws that need to be done in a specified order? 
    1. My current thoughts are about things like post-processing, where you might need to do things in a specific order (draw bloom, then draw DOF; definitely a bad example, but hopefully you understand. Please correct me if my thinking is wrong / this would never be an issue). I've put a rough sketch of what I mean after this list.
  2. How do you clear your render targets?
    1. How do you ensure that happens before you render?
    2. Do you explicitly do it before rendering your DrawItems (via the RenderPipeline)?
  3. How do you update your Uniforms / Constant buffers?
    1. Are they part of the DrawItem and get set when drawing that item?
  4. How would you handle Compute in this system?
  5. Is a DrawItem a Base class for other Render Operations / is it derived from some GPUCommand class that can be used to submit different commands?
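
For question 1, the only idea I have so far is to reserve the top bits of the sort key for a pass/stage index, so that sorting alone keeps the ordered stages in order. Just a guess on my part:

#include <cstdint>

// Hypothetical 64-bit sort key: stage order wins over everything else, then
// draws batch by material, then sort by depth.
inline uint64_t MakeSortKey(uint16_t stage, uint16_t material, uint32_t depth) {
    return (uint64_t(stage)    << 48)
         | (uint64_t(material) << 32)
         |  uint64_t(depth);
}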

 

That's all for now :)

Thanks

