How to support multiple rendering paths in my engine?

Hello there. First of all, I'm not sure if my question fits in this forum; if not, feel free to move it, and sorry about that.

 

So I'm building a "game engine" as a project for a course in my degree. The teacher said it would be nice to support forward and deferred rendering at the same time, similar to what Unity does.

 

My question is: how would I structure this so it scales, supporting any rendering path in the future without having to change the existing code? My teacher mentioned something called a "composer", but I'm not familiar with the term, and I searched a lot without finding anything.

 

I don't need example code, just some advice on how to structure the engine correctly.

 

Thank you, your help is very much appreciated.


A good first step is to separate the low-level rendering layer (DirectX, OpenGL) from the higher-level one (deferred renderer, forward, light pre-pass, etc.).
 
To achieve this, inheritance is the obvious tool: create some interfaces like IGraphicsDevice, IPostProcess, IRenderer or IRenderTarget. The exact set depends on your plans for the engine itself.
But it's always easier to use:

context->Clear(task->Buffers->GBuffer0, Color::Black);

than:

_commandList->ClearRenderTargetView(cpuHandle, color.Raw, 0, nullptr);

 
So I would definitely define some base classes and a basic rendering flow like:
prepare (collect scene data, culling, etc.) -> main scene rendering -> post-processes -> present -> additional jobs (texture streaming, etc.)
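For illustration, that flow could be sketched roughly like this - every type and method name below (Scene, Camera, VisibleSet, RenderFrame, and so on) is a placeholder, not a definitive design:

#include <vector>

class Camera {};
struct VisibleSet {};                          // items that survived culling

class Scene
{
public:
    VisibleSet Cull(const Camera&) const { return {}; }   // stub
};

class IGraphicsDevice
{
public:
    virtual ~IGraphicsDevice() = default;
    virtual void Present() = 0;
};

class IPostProcess
{
public:
    virtual ~IPostProcess() = default;
    virtual void Apply() = 0;
};

class IRenderer
{
public:
    virtual ~IRenderer() = default;

    void RenderFrame(Scene& scene, const Camera& camera)
    {
        VisibleSet visible = scene.Cull(camera);   // prepare: collect + cull
        RenderScene(visible);                      // forward, deferred, ...
        for (IPostProcess* effect : mPostProcesses)
            effect->Apply();                       // post-processes
        mDevice->Present();                        // present
        // ...then additional jobs: texture streaming, etc.
    }

protected:
    virtual void RenderScene(const VisibleSet& visible) = 0;   // per path

    IGraphicsDevice* mDevice = nullptr;
    std::vector<IPostProcess*> mPostProcesses;
};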

You will need to make custom entry points for your shaders that take and return general data, then have a deferred shader and a forward shader that you compile together with the other shaders in your code. This means you will compile each shader twice: once for forward shading, once for deferred shading.

General shader code

// Shared material-evaluation entry point, compiled into both the
// forward and the deferred variant of the shader.
struct ShaderResult
{
  vec4 diffuse;
  vec4 emissive;
  vec4 normal;
};

ShaderResult shaderMain()
{
  ShaderResult result;

  result.diffuse = ...;    // evaluate the material here
  result.emissive = ...;
  result.normal = ...;

  return result;
}

Forward Shading Code

// calculateLighting() is assumed to be defined alongside these snippets.
void main()
{
  // Evaluate the material, then light it immediately.
  ShaderResult result = shaderMain();
  gl_FragColor = calculateLighting(result);
}
Deferred Shading Code

void main()
{
  // No lighting here - just write the material data to the G-buffer.
  ShaderResult result = shaderMain();
  gl_FragData[0] = result.diffuse;
  gl_FragData[1] = result.emissive;
  gl_FragData[2] = result.normal;
}
Then in your deferred lighting pass

uniform sampler2D diffuseTex;    // the G-buffer attachments
uniform sampler2D emissiveTex;
uniform sampler2D normalTex;
varying vec2 texCoord;

void main()
{
  // Rebuild the same ShaderResult from the G-buffer, then light it.
  ShaderResult lightingInput;
  lightingInput.diffuse = texture2D(diffuseTex, texCoord);
  lightingInput.emissive = texture2D(emissiveTex, texCoord);
  lightingInput.normal = texture2D(normalTex, texCoord);
  gl_FragColor = calculateLighting(lightingInput);
}
Then you need to have two different composite paths: you choose which compiled variant of the shader to use for each rendered object, depending on whether you are using forward shading or deferred shading.
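To make that concrete on the CPU side, the per-object selection could look something like this - the type and function names here are made up for illustration, not from any particular API:

enum class ShadingPath { Forward, Deferred };

struct ShaderProgram { unsigned id = 0; };   // stand-in for a GPU program handle

struct Material
{
    ShaderProgram forwardVariant;    // shaderMain() linked with the forward main()
    ShaderProgram deferredVariant;   // shaderMain() linked with the deferred main()

    const ShaderProgram& VariantFor(ShadingPath path) const
    {
        return (path == ShadingPath::Forward) ? forwardVariant
                                              : deferredVariant;
    }
};

// At draw time, each object binds material.VariantFor(activePath)
// before issuing its draw call.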


Well, the best method would be to have a separate class for each rendering path, e.g.:

 

class IRenderer
{
public:
    virtual ~IRenderer() = default;
    virtual void Render() = 0;
};

class CDeferredShading : public IRenderer
{
public:
    void Render() override;
};

class CForwardShading : public IRenderer
{
public:
    void Render() override;
};

 

Because the rendering order differs vastly based on your rendering path (deferred shading, deferred lighting, standard forward, or forward plus), you'll have to modify some of your shaders; more specifically, the opaque/transparent shaders have to skip lighting computations and render to the G-buffer in the deferred case. I don't know the extent of what you meant by "to support any rendering path in the future without having to change anything of the previous code", but no matter what change you make to your rendering path, you will have to modify code. This is unless you make a completely data-driven renderer, but I doubt you want to take that route.
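For what it's worth, wiring up the classes above can be as simple as this sketch (the factory function and enum are invented names):

#include <memory>

enum class RenderPath { Forward, Deferred };

std::unique_ptr<IRenderer> CreateRenderer(RenderPath path)
{
    if (path == RenderPath::Deferred)
        return std::make_unique<CDeferredShading>();
    return std::make_unique<CForwardShading>();
}

// The rest of the engine only ever talks to IRenderer:
//   auto renderer = CreateRenderer(settings.renderPath);
//   renderer->Render();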


"The teacher said it would be nice to support forward and deferred rendering at the same time, similar to what Unity does."

 
It would be nice (and often necessary in the case of deferred rendering, which can't handle transparent objects on its own) but not that important considering all of the other aspects of a game engine. But if you must...
 
First of all, all of that inheritance stuff is completely unnecessary. Also, shaders will be structured in an entirely different way, and the same features will often require different approaches for the rendered images from both renderers to match - a basic deferred renderer probably won't have support for multiple materials, for example, so to add that, a "material ID" would have to be stored in the G-buffer (possibly in the stencil buffer) and used as a pixel filter for multipass light rendering.
 
Some simple observations:

  • The forward rendering pipeline requires a "for each item { render(item, ambient_lighting); for each light { render(item, light) } }" rendering order or similar
  • The deferred rendering pipeline requires a "for each item { render(item) } for each light { applyLight(light) }" rendering order

The details vary but the point remains - you need a way to render all of the visible (or just all) items. Also, you need to render polygons that roughly describe the area occupied by the light for deferred shading.
 
So, you can achieve a lot with just two functions - DrawItems(items) and DrawTriangles(vertices) - used with the appropriate shaders and render targets at the right time. How you choose to call these functions is completely up to you. Might be as simple as RenderForward() and RenderDeferred(), a configuration variable or something else entirely.
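As a rough sketch of those two rendering orders (Item, Light, Shader, the render target handles and the Draw*/Bind* helpers are all hypothetical placeholders here):

#include <vector>

struct Item {};
struct Light
{
    std::vector<float> BoundingVolumeVertices() const { return {}; }   // stub
};
struct Shader {};
struct RenderTarget {};

void DrawItems(const std::vector<Item>&, const Shader&);
void DrawItems(const std::vector<Item>&, const Shader&, const Light&);
void DrawTriangles(const std::vector<float>&, const Shader&);
void BindRenderTarget(const RenderTarget&);

void RenderForward(const std::vector<Item>& items,
                   const std::vector<Light>& lights,
                   const Shader& ambientShader, const Shader& lightShader)
{
    for (const Item& item : items)
    {
        DrawItems({item}, ambientShader);              // base/ambient pass
        for (const Light& light : lights)
            DrawItems({item}, lightShader, light);     // additive light pass
    }
}

void RenderDeferred(const std::vector<Item>& items,
                    const std::vector<Light>& lights,
                    const RenderTarget& gbuffer, const RenderTarget& backbuffer,
                    const Shader& gbufferShader, const Shader& lightShader)
{
    BindRenderTarget(gbuffer);
    DrawItems(items, gbufferShader);                   // fill the G-buffer

    BindRenderTarget(backbuffer);
    for (const Light& light : lights)                  // light volume or quad
        DrawTriangles(light.BoundingVolumeVertices(), lightShader);
}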

 

Also, to start with simple code, you can just draw a fullscreen quad for each light - it will be slower (a lot slower with many lights) but it guarantees correct results with no extra effort.
 
There are lots of optimizations/improvements to apply here (item ordering by transparency, depth sorting, global sorting with item span rendering, separate solid/transparent forward rendering passes, Z prepass, etc.) but I'll leave those unexplained for now, since they're not really relevant for a school-project engine. You can always ask Google, or ask more questions in the forum, once you're done implementing the basics.



I agree with a lot of the posts here, but I have a few wrinkles I'd throw in.

First off, inheritance (or however you decide to implement your interface) is great for the low-level graphics SDK abstraction.

Then I would design what information an entry in the scene needs, like the material it will need to use, and which shader it should use in either scenario.

Then I would design my structure for how to store the list of entries in the scene.

Then I would design two renderers that take in that list of entries and implement the logic for rendering it.
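As a rough sketch of what such a scene entry might hold (every name here is a placeholder for whatever your engine already has):

struct Mesh;
struct Material;
struct ShaderProgram;
struct Matrix4 { float m[16]; };

struct SceneEntry
{
    Mesh*          mesh     = nullptr;
    Material*      material = nullptr;

    // One shader per scenario, so either renderer can walk the same
    // list and pick the variant it needs.
    ShaderProgram* forwardShader  = nullptr;
    ShaderProgram* deferredShader = nullptr;

    Matrix4        worldTransform {};
};

// The scene is then just a container of these:
//   std::vector<SceneEntry> entries;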


To allow for multiple pipelines in my engine, I've got:

  • A concrete GPU API wrapper, hidden behind an abstract interface: GpuDevice <- D3D11GpuDevice / OGL4GpuDevice / etc...
  • Shaders that can contain different techniques/passes (like the MS Effects framework, CgFX, glslfx, etc).
    i.e. A "Shader" is actually a collection of shader programs / entry points. When binding a shader, it will pick a different program/entry-point depending on whether you're drawing a shadow-map pass, a forward-opaque pass, a g-buffer pass, etc...
  • A scene to hold onto your objects and perform frustum culling (and optionally other kinds of culling).
  • The ability to traverse the scene and collect objects that are visible to a particular camera and whose shader can contribute to the pass in question (e.g. a hologram shader might have no shadow-map entry point, so its objects would not be collected during a shadow-map camera's traversal).
  • An abstract 'pipeline' interface, which can consume collections of objects and draw them in the appropriate order to fill in some textures (a sketch follows at the end of this post).

Pipelines can be chained together / be dependent on each other. E.g. a deferred rendering pipeline might make use of a shadow-map rendering pipeline.
This keeps the low-level parts of the engine agnostic to what kind of shading/lighting algorithms are being used. If I want to switch from deferred to forward, I create a new forward-rendering pipeline, and add a new entry point to my shaders. The GpuDevice classes and the scene classes are unaware of this, and are completely reusable.
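As a rough sketch of that pipeline interface (RenderPipeline, DrawCollection and Texture are invented names here, not the actual classes from my engine):

#include <vector>

class Texture;
struct DrawCollection {};              // objects collected for one pass

class RenderPipeline
{
public:
    virtual ~RenderPipeline() = default;

    // Consume pre-culled object collections, draw them in the order this
    // pipeline defines, and return the texture(s) it produced.
    virtual Texture* Execute(const std::vector<DrawCollection>& input) = 0;
};

class ShadowMapPipeline : public RenderPipeline
{
public:
    Texture* Execute(const std::vector<DrawCollection>& input) override;
};

class DeferredPipeline : public RenderPipeline
{
public:
    // Pipelines can chain: deferred shading consumes the shadow-map
    // pipeline's output.
    explicit DeferredPipeline(ShadowMapPipeline* shadows) : mShadows(shadows) {}

    Texture* Execute(const std::vector<DrawCollection>& input) override;

private:
    ShadowMapPipeline* mShadows;
};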
