How to build a "renderer"

7 comments, last by IvicaKolic 8 years ago

I render with OpenGL and I wonder how common engines build their Renderer class.

1. Does this class hold data members like vec4 m_clearColor; bool m_isWireframe;, or does it just contain functions that abstract the actual OpenGL routines? Something like


void MyRenderer::setWireframe(bool flag)
{
    glPolygonMode(GL_FRONT_AND_BACK, flag ? GL_LINE : GL_FILL); // GL_LINE = wireframe
}
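
If it held state, I imagine it could also skip redundant state changes - something like this (just a sketch of what I mean, assuming a vec4 type with operator== and x/y/z/w members):

class MyRenderer
{
public:
    void setWireframe(bool flag)
    {
        if (m_isWireframe == flag)
            return; // cached member lets us skip the redundant GL call
        m_isWireframe = flag;
        glPolygonMode(GL_FRONT_AND_BACK, flag ? GL_LINE : GL_FILL);
    }

    void setClearColor(const vec4& color)
    {
        if (m_clearColor == color)
            return;
        m_clearColor = color;
        glClearColor(color.x, color.y, color.z, color.w);
    }

private:
    bool m_isWireframe = false;
    vec4 m_clearColor  = vec4(0.0f, 0.0f, 0.0f, 1.0f);
};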

2. How does the whole framework use this class? Does it even have to be a class?

I know these are really broad questions, so if you can link me to other sources that would be nice too.


What I do: [Hodgman's presentation on renderer design]

Just a question for you rendering architects: do you have some central place that does all the drawing, regardless of what is actually drawn, or do you have something like custom render modules where each implements its own rendering - setting up buffers, fixed-function state, shaders, submitting draw commands, etc.?

I have been thinking... it should be enough if those custom renderCmds just gathered the data required to render and submitted it to a central renderer (which buffers do I need? which shaders? etc.). That would get rid of a lot of boilerplate (only a couple hundred lines of rendering code for each API backend? one can dream), and the central renderer would have all the info it needs to batch things efficiently. But I'm not sure how feasible this is in practice - whether you end up with lots of if()s handling special cases anyway, or whether something else prevents it from being as clean as it sounds.
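
To make that concrete, I picture a submitted command looking something like this (hypothetical names - RenderCmd, CentralRenderer - just sketching the idea):

#include <algorithm>
#include <cstdint>
#include <vector>

// A module fills one of these instead of issuing API calls itself.
struct RenderCmd
{
    uint32_t shaderId;        // which shader do I need?
    uint32_t vertexBufferId;  // which buffers do I need?
    uint32_t indexBufferId;
    uint32_t indexCount;
};

class CentralRenderer
{
public:
    void submit(const RenderCmd& cmd) { m_queue.push_back(cmd); }

    void flush()
    {
        // With every draw in one place, the renderer can sort by shader
        // (and buffers, depth, etc.) to minimize state changes...
        std::sort(m_queue.begin(), m_queue.end(),
                  [](const RenderCmd& a, const RenderCmd& b)
                  { return a.shaderId < b.shaderId; });

        // ...then translate each RenderCmd into actual GL/D3D calls here.
        m_queue.clear();
    }

private:
    std::vector<RenderCmd> m_queue;
};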

edit: Having read it now, the presentation by Hodgman above seems to implement exactly this. Does it work for everything? Is no custom drawing code needed anymore?

Most renderers are pretty much the same.

Typically you have an abstraction interface for all the APIs the engine supports. And because DirectX's Effect files are no longer usable, engines typically build their own pass system - which, to be fair, is more efficient anyway.
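
Something along these lines (a rough sketch; the names are invented):

// Hypothetical abstraction interface; each supported API gets a backend.
struct DrawItem
{
    unsigned shaderId;
    unsigned meshId;
};

class IRenderBackend
{
public:
    virtual ~IRenderBackend() = default;
    virtual void beginPass(const char* passName) = 0; // engine-defined pass, not an Effect file
    virtual void draw(const DrawItem& item) = 0;
    virtual void endPass() = 0;
};

class GLBackend : public IRenderBackend
{
public:
    void beginPass(const char* passName) override { /* bind GL framebuffers/state */ }
    void draw(const DrawItem& item) override      { /* translate to glDraw* calls */ }
    void endPass() override                       { /* resolve/unbind targets */ }
};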

I prefer a generic design where the graphics engine consists only of a RenderQue, a RenderParamTracker, and a TokenProcessor (meshes, materials, and vertex definitions carry tokens). An INI file defines the render modes, render targets, render params, token rules, etc...

It's tiny and efficient, and you never have to touch the engine code (if implemented properly).


Essentially something like this.

My current design separates the engine's data from the renderer's own. When the engine calls the renderer, the rendering logic does not need to know about the particulars of the engine's data. Instead, the renderer's API receives copies and transforms the data into the state it needs.

When it comes time to render, the culling system works independently of the current game state - it displays a latent frame instead and culls based on the data it has. This also means the renderer uses its own octree for its own processes: primarily culling, but also as a way of determining a broader spectrum of LOD.

The engine's logic has its own octree for logical processes: raycasting, scripts that affect certain regions of land, navmesh collisions, etc.
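
In code terms the hand-off looks roughly like this (hypothetical names, just to illustrate):

#include <cstdint>
#include <vector>

struct mat4 { float m[16]; }; // stand-in for a real math type

// The engine fills one of these per visible object; the renderer keeps
// its own copy, so game state can keep mutating while a latent frame renders.
struct RenderableSnapshot
{
    uint32_t meshId;
    uint32_t materialId;
    mat4     worldTransform; // copied, not referenced
};

struct FrameSnapshot
{
    mat4                            viewProj;
    std::vector<RenderableSnapshot> renderables;
};

class Renderer
{
public:
    void submitFrame(const FrameSnapshot& frame)
    {
        m_frame = frame;                    // deep copy of the relevant data
        rebuildOctree(m_frame.renderables); // renderer-side octree: culling + LOD
    }

private:
    void rebuildOctree(const std::vector<RenderableSnapshot>& items) { /* ... */ }
    FrameSnapshot m_frame;
};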


For occlusion culling I'm using view frustum culling + the Hi-Z algorithm (actually Lo-Z, because of the inverted z-buffer).

The main engine script does this:

SetRenderMode("PrepForHiZ"); // This will invoke setting of render targets, rendering quads, and after rendering is done, setting shader textures of those render targets, etc...

RenderOcclussionSpheres(); // this is main engine command - it keeps track of current objects (each has his own id and pos/radius)

void * flags = LockGraphicBuffer("occlusion_test_render_target", (X + 10)%10); // I'm having 10 frames delay (and 10 occlusion buffers)

SetOcclussionFlagsForObjects(flags);

The graphics engine doesn't even know that it performed an occlusion calculation. It doesn't understand what the data sent to it means - only how to send it to the graphics card.

That way it can be a forward renderer, deferred renderer, forward+ renderer, ray tracer, <some new renderer that hasn't been invented yet>. It doesn't care what the data is - only how to render it efficiently.
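
For reference, the frame-delayed lock above can be sketched with a ring of pixel pack buffers (hypothetical GL code, not my engine's actual LockGraphicBuffer; assumes a GL 3.0+ context and a loader such as GLEW set up elsewhere):

#include <cstdint>

const int kDelay = 10;   // 10 frames delay, 10 occlusion buffers
GLuint g_pbo[kDelay];

void initOcclusionBuffers(int width, int height)
{
    glGenBuffers(kDelay, g_pbo);
    for (int i = 0; i < kDelay; ++i)
    {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, g_pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height, nullptr, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void readOcclusionResults(uint64_t frame, int width, int height)
{
    // Start an async copy of this frame's occlusion render target
    // (assumed bound as the read framebuffer) into the newest PBO...
    glBindBuffer(GL_PIXEL_PACK_BUFFER, g_pbo[frame % kDelay]);
    glReadPixels(0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, nullptr);

    // ...and map the oldest buffer in the ring - the GPU finished it
    // many frames ago, so this map doesn't stall.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, g_pbo[(frame + 1) % kDelay]);
    void* flags = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, width * height, GL_MAP_READ_BIT);
    if (flags)
    {
        // SetOcclussionFlagsForObjects(flags); // as in the script above
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}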

As for the token processor:

Rendering_technique_name = <Render Mode Name> + <Remaining Mesh Tokens> + <Remaining Material Tokens> + <Remaining Vertex Tokens>

SAMPLE: When rendering depth, there is no need for tokens that have anything to do with color. Once all texture tokens are removed from the material, the vertex token that represents texture coordinates is removed too (unless there is an alpha-mask texture). At the end you are left with only a few tokens. The vertex token that represents NORMAL will probably also be removed (since it isn't even registered for the RenderDepth render mode).

Once you know the rendering technique name, you know which render parameters need to be sent to the graphics card - and the RenderParamTracker does that efficiently (without repetition).
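
In pseudo-C++ the token filtering is roughly (a sketch with made-up names, not the actual TokenProcessor):

#include <set>
#include <string>
#include <vector>

// Tokens the render mode doesn't register get stripped; whatever
// survives, appended to the mode name, names the technique.
std::string buildTechniqueName(const std::string& renderMode,
                               const std::set<std::string>& registeredTokens,
                               const std::vector<std::string>& objectTokens) // mesh + material + vertex tokens
{
    std::string name = renderMode;
    for (const std::string& token : objectTokens)
        if (registeredTokens.count(token) != 0) // e.g. "RenderDepth" never registers COLOR or NORMAL
            name += "_" + token;
    return name; // e.g. "RenderDepth_SKINNED_ALPHA_MASK"
}

The resolved name then keys which shader to use and which parameter set the RenderParamTracker has to keep up to date.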

The point is: there is no hardcoding of anything graphics-wise (shaders + graphics data come with the game files, not the engine).

You write the graphics engine once and then you don't touch it for years.

If some new rendering/post-process effect gets published, you don't change the engine - you just put the shader into the first game package and add a few lines to the INI file. Maybe add a few SetRenderMode("blablabla") calls to the rendering script for the new post-processes.
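
To illustrate (this layout is purely made up - the actual schema is whatever your engine parses):

; hypothetical INI sketch
[RenderTargets]
hiz_chain = format:R32F, mips:full

[RenderModes]
RenderDepth = targets:depth_rt
PrepForHiZ  = targets:hiz_chain, shader:hiz_downsample

[TokenRules]
; tokens RenderDepth does not register are stripped from the technique name
RenderDepth.unregistered = COLOR, NORMAL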


I never said the data was hardcoded :P Only that the renderer receives copies of the relevant data but does not care how the game engine manages it.

Currently, rendering data is defined through JSON-like Lua scripts and is instantiated as it is called upon.

So yeah,

Renderers are pretty much always done through an abstract interface (as demonstrated by Hodgman and Tangletail).

This is the abstract graphics interface in my case (used by the main engine either through scripts or hardcoded):

SetRenderMode("bla bla"); // This will set render targets and prepare everything (and remove prev render targets and bind them to shader texture variables for the future use)

// Main engine will loop through visible objects and call these two commands (similar is done for light-shadow pass):

SetRenderParam(renderParamID, value); // camData or objPos or obj bones or some custom constant or whatever (done by object script).

RenderMesh(meshID, availableDesiredLODDistance); // This will add mesh to render que and calculate render stuff based on material/mesh tokens

That is it - it is very, very abstract. I don't think it's possible to make it more abstract than that. There are also some callbacks the graphics engine uses to calculate a required rendering parameter that was not set, but that's another story...
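
A whole frame then reduces to loops over those calls - roughly like this (hypothetical glue code; Scene, Object, and PARAM_WORLD are made-up names, not the engine's):

void renderFrame(Scene& scene)
{
    SetRenderMode("RenderDepth"); // e.g. depth pre-pass / Hi-Z prep
    for (Object& obj : scene.visibleObjects())
    {
        SetRenderParam(PARAM_WORLD, obj.worldMatrix());
        RenderMesh(obj.meshID, obj.lodDistance);
    }

    SetRenderMode("MainColor"); // the INI decides what this mode binds
    for (Object& obj : scene.visibleObjects())
    {
        SetRenderParam(PARAM_WORLD, obj.worldMatrix());
        RenderMesh(obj.meshID, obj.lodDistance);
    }
}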

NOTE: Sometimes I assign a material to geometry it isn't compatible with (as defined by the meshID). In that case the graphics engine issues a warning that it doesn't know how to render that particular combination of render_mode/mesh/material/geometry tokens. Then I have to add an exception rule to the INI file or add a shader for that particular combination.

