Rendering Architecture

Hey guys, so I wanted to get some input on different rendering architectures. Right now I have a fairly generic implementation: each render proxy has a "Render" function which submits a generic "batch" class containing vertex buffers, index buffers, materials, etc. This works well, but it limits the complexity of the pipeline.

I was reading a few articles, specifically http://advances.realtimerendering.com/destiny/gdc_2015/ and http://www.wihlidal.ca/Presentations/GDC_2016_Compute.pdf. It seems that Destiny uses global "feature renderer" classes. This lets the renderer look at all visible objects of one type, e.g. all visible static meshes, so they can bucket LODs and run physics and animation calculations only for visible meshes. In the Frostbite talk, they say they run a visibility determination pass over all of their meshes before running the rest of the passes. That approach would not be possible with what I am currently doing: my rendering system does not know whether it's rendering a static mesh or an ocean. All it does is bind data to the pipeline and call DrawXXX. I wanted to know if anyone has experience with global renderers like a "CMeshRenderer" that takes all visible meshes in a scene, and whether you would recommend that approach.
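For concreteness, here is a minimal sketch of that kind of type-agnostic submission path. All names are hypothetical, not the poster's actual code:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical opaque GPU resource handles -- not from any real API.
using VertexBufferHandle = std::uint32_t;
using IndexBufferHandle  = std::uint32_t;
using MaterialHandle     = std::uint32_t;

// One generic "batch": everything needed to issue a draw, with no knowledge
// of what kind of object produced it.
struct Batch {
    VertexBufferHandle vertices;
    IndexBufferHandle  indices;
    MaterialHandle     material;
    std::uint32_t      indexCount;
};

// Render proxies call Submit() from their Render() functions; later the queue
// binds state and issues one draw per batch, in submission order.
class RenderQueue {
public:
    void Submit(const Batch& batch) { batches.push_back(batch); }

    template <typename Device>
    void Flush(Device& device) {
        for (const Batch& b : batches) {
            device.Bind(b.vertices, b.indices, b.material);
            device.DrawIndexed(b.indexCount);   // the queue never knows *what* it drew
        }
        batches.clear();
    }

private:
    std::vector<Batch> batches;
};
```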

You could first process all the scene data and perform some kind of culling, and then pass the surviving data to your renderer. That could be a simple but effective approach, as your renderer doesn't have to care about visibility and so on :)
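As a rough sketch of that split (names hypothetical, visibility test stubbed out), the scene is culled up front and the renderer only ever sees the survivors:

```cpp
#include <vector>

struct Frustum { /* camera planes, owned by the engine */ };

struct Renderable {                        // hypothetical scene object
    // Real frustum test goes here; stubbed so the sketch stands alone.
    bool IsVisible(const Frustum&) const { return true; }
    void Render() const {}                 // would submit its batch as before
};

// Phase 1: cull. Phase 2: hand only visible objects to the renderer,
// which stays completely unaware of any visibility logic.
void RenderScene(const std::vector<Renderable*>& scene, const Frustum& frustum) {
    std::vector<Renderable*> visible;
    for (Renderable* r : scene)
        if (r->IsVisible(frustum))
            visible.push_back(r);

    for (Renderable* r : visible)
        r->Render();
}
```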

The simplest approach in your case would be to add a set of supported "render types" and a visibility check. Create an enum and let each of your renderables return a value from it; you could tie the enum values to the way each type is rendered. You could then group your renderables by render type and render them in batches. This would be easy to integrate and easy to extend with new renderable types, but it would not be suitable if you have a strong need to override behaviour per individual renderable. The same goes for visibility determination: before you call "render" on something, check isVisible(camera) and you have frustum culling. Other kinds of culling would be more complicated, though.
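Something along these lines, perhaps (the enum values and names are just placeholders):

```cpp
#include <map>
#include <vector>

// Hypothetical set of supported render types.
enum class RenderType { StaticMesh, SkinnedMesh, Ocean, Terrain };

struct Camera { /* view/projection, frustum, ... */ };

struct Renderable {
    virtual ~Renderable() = default;
    virtual RenderType GetRenderType() const = 0;
    virtual bool       IsVisible(const Camera& cam) const = 0;
    virtual void       Render() const = 0;
};

// Frustum-cull, then bucket by render type so each bucket can be drawn
// with the pipeline state that type requires.
void RenderByType(const std::vector<Renderable*>& scene, const Camera& cam) {
    std::map<RenderType, std::vector<Renderable*>> buckets;
    for (Renderable* r : scene)
        if (r->IsVisible(cam))
            buckets[r->GetRenderType()].push_back(r);

    for (auto& bucket : buckets)
        for (Renderable* r : bucket.second)
            r->Render();
}
```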

Besides my advice, my experience: nowadays everything is about batched rendering and/or indirect drawing, to avoid GPU state changes and to avoid wasting CPU time on rendering. If you switch from many buffers (i.e. one per renderable) to a global vertex/index buffer combination, you can bind resources once and fire render commands that only contain offset values for buffer access. You don't need to worry much about architecture anymore: at a fixed step, or as often as possible, you take a list of game objects, split them into opaque and non-opaque, update GPU buffers, and render them. Assuming you are using deferred rendering, as most engines seem to do, you are limited in the kinds of materials you can support, which you can tackle with subroutines (OpenGL), for example, or with a material-type enum, which is less flexible.
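To illustrate, the per-draw data can shrink to little more than offsets into the shared buffers. This sketch uses a command layout compatible with OpenGL's glMultiDrawElementsIndirect; the surrounding record and function names are made up:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Matches OpenGL's DrawElementsIndirectCommand layout, so a whole array of
// these can be uploaded and drawn with glMultiDrawElementsIndirect.
struct DrawCommand {
    std::uint32_t indexCount;
    std::uint32_t instanceCount;
    std::uint32_t firstIndex;     // offset into the shared index buffer
    std::int32_t  baseVertex;     // offset into the shared vertex buffer
    std::uint32_t baseInstance;   // can index a per-draw data buffer (material, transform)
};

// Hypothetical per-object record: where the mesh lives in the global buffers.
struct DrawRecord {
    DrawCommand cmd;
    bool        opaque;
    float       viewDepth;        // used to sort transparents back-to-front
};

// Split into opaque and non-opaque, sort, and emit two command lists.
// The vertex/index buffers are bound once; only the commands differ per draw.
void BuildCommandLists(std::vector<DrawRecord>& visible,
                       std::vector<DrawCommand>& opaqueCmds,
                       std::vector<DrawCommand>& transparentCmds) {
    std::sort(visible.begin(), visible.end(),
              [](const DrawRecord& a, const DrawRecord& b) {
                  if (a.opaque != b.opaque) return a.opaque;     // opaque first
                  return a.opaque ? a.viewDepth < b.viewDepth    // front-to-back
                                  : a.viewDepth > b.viewDepth;   // back-to-front
              });
    for (const DrawRecord& r : visible)
        (r.opaque ? opaqueCmds : transparentCmds).push_back(r.cmd);
}
```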

Thank you both for your responses. Yeah, I think the way I'm going to do it is to have each class provide its type when being submitted for rendering. When it does, I'll dispatch it to the proper rendering system.


I'd have, for example:
* BoundingVolumeCollection -- holds a collection of aabb's / bspheres / etc. Has a const function that is passed a frustum and returns a massive array of bits indicating whether each bounding shape is visible or not.
* ModelCollection -- holds a collection of models, which are collections of lods, which are collections of meshes. Each node within a model can have an index into a bounding-volume collection.
* The logic of the component that owns these two systems would first update the location of the nodes, then query visibility of the bounding volumes, then collect any meshes that are visible. Those collected meshes can then be sorted and submitted.

ModelCollection has knowledge of BoundingVolumeCollection, but not vice versa -- so culling can be done regardless of what kinds of things are actually being culled.
ModelCollection would be a "feature renderer" -- it knows how to render generic "model" files.
You could have other feature renderers -- HeightmapTerrain, ProceduralCharacterCollection, etc -- and they could share the same BoundingVolumeCollection as the ModelCollection, or could use a different instance of a BoundingVolumeCollection.
You can also have two or more ModelCollections - each sharing the same BoundingVolumeCollection, or having unique instances.
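A minimal sketch of how those two collections could fit together, using bounding spheres for brevity (every name and signature here is an assumption, not code from Destiny or any shipping engine):

```cpp
#include <cstdint>
#include <vector>

struct Plane   { float nx, ny, nz, d; };    // plane normal + distance, normals facing inward
struct Frustum { Plane planes[6]; };
struct Sphere  { float x, y, z, radius; };

// Knows nothing about meshes: it only owns bounding shapes and answers
// "which of these are inside the frustum" as one flag per shape.
class BoundingVolumeCollection {
public:
    std::uint32_t Add(const Sphere& s) {
        spheres.push_back(s);
        return static_cast<std::uint32_t>(spheres.size() - 1);
    }

    std::vector<bool> Cull(const Frustum& frustum) const {
        std::vector<bool> visible(spheres.size());
        for (std::size_t i = 0; i < spheres.size(); ++i)
            visible[i] = Intersects(frustum, spheres[i]);
        return visible;
    }

private:
    static bool Intersects(const Frustum& f, const Sphere& s) {
        for (const Plane& p : f.planes)
            if (p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d < -s.radius)
                return false;               // fully behind one plane -> culled
        return true;
    }
    std::vector<Sphere> spheres;
};

// One "feature renderer": knows how to draw generic model files. Each mesh
// node stores an index into the bounding-volume collection, never the reverse.
class ModelCollection {
public:
    struct MeshNode {
        std::uint32_t boundsIndex;          // index into BoundingVolumeCollection
        std::uint32_t meshId;               // whatever the submission path needs
    };

    void CollectVisible(const std::vector<bool>& visibility,
                        std::vector<std::uint32_t>& outMeshIds) const {
        for (const MeshNode& node : nodes)
            if (visibility[node.boundsIndex])
                outMeshIds.push_back(node.meshId);
    }

    std::vector<MeshNode> nodes;
};
```

The owning component would then cull once per camera, hand the resulting bit array to every feature renderer sharing that collection, and sort and submit whatever they collect.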

I don't use virtual for any of this... But I guess you could make an IFeatureRenderer interface that knows how to collect 'batches'.
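If you did go the virtual route, the interface could stay very small. Purely illustrative:

```cpp
#include <vector>

struct Frustum { /* camera planes */ };
struct Batch   { /* buffer handles, material, draw arguments */ };

// Each feature renderer owns its own internal data layout; the owner just
// asks every registered feature for the batches it wants drawn this frame.
class IFeatureRenderer {
public:
    virtual ~IFeatureRenderer() = default;
    virtual void CollectBatches(const Frustum& frustum,
                                std::vector<Batch>& outBatches) = 0;
};
```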
