Render framework components and tips?


Hey guys!

I want to start creating a bunch of real-time rendering demos. To do so, I thought having a common rendering framework would be of help, and I could just integrate it as a module if later I wanted to make a game engine.

I have a basic understanding of OO principles and C++, and I've dabbled with OpenGL twice (my last project here). I know the rendering pipeline and have worked with different types of shaders, but that's it.

I could just start making stuff and refactor when I see I can reuse a portion of code, but if you could point me in the right direction I would be very grateful. What components go into the making of a complete rendering engine/framework? Which are the most important ones? What things should I take into consideration while developing such a tool?


There are a few major components:

Shader System

API Abstraction

Renderer Core

Scene Core

The shader system manages compilation, parameter setup, permutation management, and customization of shaders for the rendering pipeline. There are many approaches to this: you can take a fully text-driven approach like Unity, or a hybrid approach like Unreal Engine and Source Engine. Unity's system uses a generic shader class that is compiled for each shader in your engine. That approach is fast for prototyping, but it is the slowest at runtime, because you have to use string hashes/string lookups to bind parameters and so on. Unreal Engine and Source Engine use a hybrid approach: each shader has a shader file and an associated C++ class. For example, you may have a class called "CDirectionalLightPS" which handles the directional lighting computation. Because each shader requires a specific set of parameters, the main job of this shader class is to bind all of those parameters from engine input, circumventing the need to look up shader parameters by string hash/string lookup every frame. The benefit of a system like this is fast runtime performance, but you have to touch engine code whenever you add a shader, which in some cases has its own downsides.
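A rough sketch of what one of those per-shader classes might look like (the CGfxShader/CGfxContext types and the FindParameter/SetVector calls are made up for illustration, not any particular engine's API; the point is that parameter handles are resolved once, not per frame):

class CDirectionalLightPS : public CGfxShader
{
public:
    // Resolve parameter locations once, at load/compile time.
    void Init(CGfxShaderProgram& program)
    {
        m_LightDirParam   = program.FindParameter("g_LightDirection");
        m_LightColorParam = program.FindParameter("g_LightColor");
    }

    // Called every frame: binds values through the cached handles only,
    // so no string hashes/lookups on the hot path.
    void SetParameters(CGfxContext& ctx, const SDirectionalLight& light)
    {
        ctx.SetVector(m_LightDirParam,   light.Direction);
        ctx.SetVector(m_LightColorParam, light.Color);
    }

private:
    CGfxParameterHandle m_LightDirParam;
    CGfxParameterHandle m_LightColorParam;
};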

The API abstraction makes it easier to switch between different rendering backends, such as D3D11/D3D9/D3D12/OpenGL/Metal, etc. It is usually a set of wrapper classes: vertex buffers, index buffers, shaders, contexts, devices, vertex declarations. What most engines do is look at each resource the engine will use and create a base class for it; each backend then implements its own version of that base class, e.g.:

class CGfxVertexBuffer {};

class CD3D11VertexBuffer : public CGfxVertexBuffer {};
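Fleshed out a little, the base class declares the operations every backend must provide, and each backend wraps its native resource. A minimal sketch (the Lock/Unlock names and the D3D11 member are illustrative guesses, not a specific engine's API):

#include <cstddef>
#include <d3d11.h>

class CGfxVertexBuffer
{
public:
    virtual ~CGfxVertexBuffer() {}
    virtual void* Lock(size_t offset, size_t size) = 0; // map for CPU writes
    virtual void  Unlock() = 0;
};

class CD3D11VertexBuffer : public CGfxVertexBuffer
{
public:
    void* Lock(size_t offset, size_t size) override; // would wrap ID3D11DeviceContext::Map
    void  Unlock() override;                         // would wrap Unmap
private:
    ID3D11Buffer* m_Buffer = nullptr;
};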

The renderer core handles shader selection, render queues, and rendering logic. This is where you will maintain classes such as the deferred renderer, the forward renderer, the render queues, etc.:

class CSceneRenderer {};

class CDeferredRenderer : public CSceneRenderer {};

etc...

class CRenderQueue {};
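As a sketch, a render queue is often just a flat array of small draw items with a sort key, something along these lines (SRenderItem, CMesh and CMaterial are placeholder names for illustration):

#include <algorithm>
#include <cstdint>
#include <vector>

class CMesh;
class CMaterial;

struct SRenderItem
{
    std::uint64_t     SortKey;   // e.g. packed shader / material / depth bits
    const CMesh*      Mesh;
    const CMaterial*  Material;
};

class CRenderQueue
{
public:
    void Add(const SRenderItem& item) { m_Items.push_back(item); }

    // Sort once per frame so draws sharing a shader/material end up adjacent.
    void Sort()
    {
        std::sort(m_Items.begin(), m_Items.end(),
                  [](const SRenderItem& a, const SRenderItem& b)
                  { return a.SortKey < b.SortKey; });
    }

    const std::vector<SRenderItem>& Items() const { return m_Items; }

private:
    std::vector<SRenderItem> m_Items;
};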

The scene core is where you manage, well... the scene. This is where all of your game entities live. At the beginning of the frame you query this system for all visible entities for each relevant view, and pass that data to the renderer to draw all of the objects.
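Putting the pieces together, a typical frame might look roughly like this (CScene, CView and the CullVisible/Render calls are placeholders for whatever you end up building, not a real engine's API):

void RenderFrame(CScene& scene, CSceneRenderer& renderer, const CView& view)
{
    CRenderQueue queue;

    // Scene core: gather the entities visible from this view (frustum culling, etc.).
    scene.CullVisible(view, queue);

    // Renderer core: sort the queue and submit the draw calls
    // (deferred or forward, depending on which CSceneRenderer is in use).
    queue.Sort();
    renderer.Render(view, queue);
}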

