Post processing in the rendering pipeline

I'm in tidy-up and refactor mode today. I got my font engine working nicely, which means I can display timings for various parts of the engine. One thing I wanted to do was see how fast my post processing engine is, so I switched on my simple Gaussian blur post processing effect.

My rendering pipeline thus far is kinda simple but hopefully fairly scalable. My engine uses entities and components, so when I come to render I go through each entity (no culling implemented yet) and ask it to give me back a vector of 'render tokens'; this list gets appended to my overall list of render tokens.

The reason an entity can return more than one render token is that its model may have more than one material, so there is one render token per mesh subset (i.e. per material). This works quite nicely and means that when I sort my render tokens, all like materials are grouped together for minimal state changes.

Once I have this list of render tokens, I sort them (currently only based on shader type/material) and pass them to the 'RenderFrame' method of my rendering manager. The rendering manager then renders each token and, once that is complete, the post processing manager steps in and renders the post processing effect chain.
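As a rough illustration of that flow, here is a minimal sketch of the token-and-sort idea. The names (RenderToken, SortTokens, the key contents) are hypothetical, not the actual engine's API:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct RenderToken
    {
        std::uint64_t sortKey;     // packed shader/material bits; layer bits could be added later
        const void*   meshSubset;  // whatever is needed to actually draw this subset
    };

    // Sort so identical shader/material combinations end up adjacent, which keeps
    // state changes to a minimum when the rendering manager walks the list.
    void SortTokens(std::vector<RenderToken>& tokens)
    {
        std::sort(tokens.begin(), tokens.end(),
                  [](const RenderToken& a, const RenderToken& b)
                  { return a.sortKey < b.sortKey; });
    }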

This works great so far but it's a bit limiting. Because my text is rendered by just creating an entity per line of text with a mesh component (a simple screen-aligned quad) and a render component, it is just treated like any other render token. This is where the limits come in. My Gaussian blur post processing effect blurred everything including my text.

So I guess I should introduce more bits into my sorting value (I've been basing this on the "Order your graphics draw calls around!" article on the realtimecollisiondetection.net site); by this I mean pushing the HUD render tokens so they are rendered after the post processing effects.
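For example (a sketch only; the field widths and the MakeSortKey name are made up for illustration), putting the screen-layer bits above everything else would push the HUD tokens to the end of the sorted list:

    #include <cstdint>

    // 0 = world geometry, 1 = post processing, 2 = HUD; field widths are arbitrary.
    std::uint64_t MakeSortKey(std::uint64_t screenLayer,
                              std::uint64_t shaderId,
                              std::uint64_t materialId)
    {
        return (screenLayer << 60) | (shaderId << 40) | (materialId << 20);
    }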

This leads me to an interesting design decision... Do I keep my post processing engine as it is, i.e., a self-contained manager that maintains its own full screen quad meshes, shaders, etc, and execute it in the rendering pipeline after some flag that says 'we're done rendering some stuff, do some post processing now, and now render some more stuff'...

Or do I change things drastically and make it more generic by treating a post process effect as just another entity? These could then be slotted into the render pipeline with a sort-id bit that puts them in the correct place in the order. I could then have HUD entities with priorities that come after the post processing effect entities.

I think I prefer the second option as it feels more scalable - the only issue I can see is that some post processing effects need inputs from others, e.g. bloom, so I'd need to build that into the rendering engine somehow. Perhaps I should make the back buffer surface available as an input to each render token if it needs it...

Any thoughts or suggestions?

Thanks in advance

I'd personally go for the more drastic change. There are going to be some PP effects that you're going to want to render after text/HUD/GUI/etc (I personally implement brightness/gamma using PP these days as it plays nicer with windowed modes and doesn't screw up the user's desktop post-crash) so enabling that seems a good idea.

Regarding your last question, what I've done is implement a simple "render target manager" that handles switching of render targets and storing out the current one so that it can be grabbed for input to anything that needs it. It's not as robust or elegant as I'd like, but it suffices for my own purposes.
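A rough sketch of what such a render target manager might look like (RenderTarget and the Bind call are placeholders for whatever the underlying API actually provides):

    #include <vector>

    struct RenderTarget;   // stand-in for the API-specific object (D3D surface, GL FBO, ...)

    class RenderTargetManager
    {
    public:
        void Push(RenderTarget* rt)            // start rendering into a new target
        {
            stack_.push_back(rt);
            Bind(rt);
        }

        RenderTarget* Pop()                    // done with it; the previous target becomes current again
        {
            RenderTarget* finished = stack_.back();
            stack_.pop_back();
            Bind(stack_.empty() ? nullptr : stack_.back());
            return finished;                   // caller can feed this into whatever needs it as input
        }

    private:
        void Bind(RenderTarget*) {}            // would call SetRenderTarget / glBindFramebuffer here
        std::vector<RenderTarget*> stack_;
    };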

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

I handle rendering and post-processing by different "scenes". In this context, the HUD is its own scene (this allows for 3D HUDs without worrying about intersection with world objects) and the game world is its own scene.

This is not the same as grouping different meshes by different shaders, because a rendered scene's output is a flat image that can be layered on top of other renders. Each scene has its own "render path" to take, allowing different processes, such as mixing forward rendering with deferred rendering. The render path also has optional PP steps.

The way it works is: first, I clear the screen. Then, for each scene, I render it through its render path. Each "step" in the path is a self-contained rendering class; the common property is that it can take meshes, cameras or render targets as inputs. Classes may be reused for different scenes. PP steps usually take only render target inputs, while 3D rendering steps take cameras and meshes, but they all output render targets. At the last step, output the final render target and continue with the next scene. Each render path shares a pool of render target textures.
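Something along these lines, perhaps (a sketch under my own naming; RenderStep, RunPath and the parameter shapes are assumptions, not the actual classes):

    #include <vector>

    struct Mesh; struct Camera; struct RenderTarget;

    class RenderStep
    {
    public:
        virtual ~RenderStep() = default;

        // Every step outputs a render target. 3D steps use the meshes and camera;
        // PP steps mostly use only the 'input' target produced by the previous step.
        virtual RenderTarget* Run(const std::vector<Mesh*>& meshes,
                                  const Camera* camera,
                                  RenderTarget* input) = 0;
    };

    // A scene's render path is an ordered list of steps; each step's output feeds
    // the next, and the final output is what gets layered over earlier scenes.
    RenderTarget* RunPath(const std::vector<RenderStep*>& path,
                          const std::vector<Mesh*>& meshes, const Camera* camera)
    {
        RenderTarget* current = nullptr;
        for (RenderStep* step : path)
            current = step->Run(meshes, camera, current);
        return current;
    }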

These renders are layered back to front, so I would render the HUD last with its own rendering technique. As the layers have a transparent background, scenes with some PP effects can blend in front of others. For instance, if you wanted to make some kind of "glow" shader for the HUD with blur and light bloom, the glowing edges would be visible on top of the render of the game world.

I think it would be overkill to have many separate layers of stuff being rendered (and it could hurt performance with many screen-sized quads), but I see it being useful for separating a few scenes that are tied to different game components, since the HUD and game world operate under different logic and input rules.

New game in progress: Project SeedWorld

My development blog: Electronic Meteor

Thank you both. I really like the idea of separate renderers for each screen layer and, in fact, I was going to add two screen layer bits to my sorting mask but, as you've alluded to, CC, I wouldn't be able to add post processing effects to each layer separately (I'm not sure I'd really need this for my game).

I started playing with the idea of a post processing effect being just another entity and, thanks to the beauty of entity/component systems, got it up and running without too much hassle. I have decided to go with a separate render call for each layer as it makes things much easier and clearer. I will be keeping the post process effects as entities, though, as that works pretty nicely; they'll just be sorted to the end of the render tokens.

Thanks again.

Incidentally, if I have a shader technique with more than one pass, do I always have to feed the second one with the rendered results of the first? I'd have thought the underlying libraries would know that by default and somehow do it automatically. I use 2 passes with my Gaussian blur shader, horizontal and vertical, and my application code has to feed the results of the horizontal one into the vertical one. I guess the underlying system can't always know what we're trying to achieve....
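Concretely, the sort of thing the application code ends up doing is shown below (a sketch with placeholder calls; SetRenderTarget, SetTexture and DrawFullScreenQuad stand in for whatever the effect framework actually exposes):

    struct RenderTarget;

    // Placeholder declarations for the real framework calls.
    void SetRenderTarget(RenderTarget* target);
    void SetTexture(const char* name, RenderTarget* source);
    void DrawFullScreenQuad(const char* technique, int pass);

    void GaussianBlur(RenderTarget* source, RenderTarget* temp, RenderTarget* dest)
    {
        // Pass 0: horizontal blur, source -> temp
        SetRenderTarget(temp);
        SetTexture("InputTexture", source);
        DrawFullScreenQuad("GaussianBlur", 0);

        // Pass 1: vertical blur, temp -> dest; the framework has no way of knowing
        // this hand-off is wanted, so the application wires it up explicitly.
        SetRenderTarget(dest);
        SetTexture("InputTexture", temp);
        DrawFullScreenQuad("GaussianBlur", 1);
    }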


Yes it can.
This is where you implement a node-based pluggable system where each node defines its named inputs (with system-provided “ColorBuffer” and “DepthBuffer”, maybe also “NormalBuffer” if you need, etc.) and named outputs.
An input name that is an empty string means “use the output from the previous stage if there was a previous stage, and otherwise use ColorBuffer”.

Basically you can chain together a bunch of post-processing nodes and name their inputs and outputs.
The nodes also define what resources they will need for their work, specifically what texture dimensions (in raw sizes or as a percent of the screen size) and formats they will use (these are named and optionally used as outputs).
The system pre-processes the nodes and can determine the minimum amount of resources to allocate based on how frequently resources can be reused between the post-processing phases etc.
During run-time the system runs over the nodes in order, putting their necessary resources into the necessary texture/framebuffer slots and executing each node's virtual Process() function, which sets its shader, sets data in the shader as per its needs, and does its job.
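In rough code, the chain might be driven like this (a sketch only; PostProcessNode, RunChain and the container choices are illustrative assumptions, not the actual system):

    #include <map>
    #include <string>
    #include <vector>

    struct Texture;

    class PostProcessNode
    {
    public:
        virtual ~PostProcessNode() = default;

        std::vector<std::string> inputs;    // e.g. { "" } or { "ColorBuffer", "BloomBright" }
        std::vector<std::string> outputs;   // e.g. { "BloomBright" }

        // Binds its shader, sets its data and draws; resources are already in place.
        virtual void Process(const std::vector<Texture*>& in,
                             const std::vector<Texture*>& out) = 0;
    };

    // Resolve named inputs/outputs to pre-allocated textures and run the nodes in order.
    // An empty input name means "use the previous node's output, or ColorBuffer if first".
    void RunChain(std::vector<PostProcessNode*>& nodes,
                  std::map<std::string, Texture*>& namedTargets)
    {
        Texture* previous = namedTargets["ColorBuffer"];
        for (PostProcessNode* node : nodes)
        {
            std::vector<Texture*> in, out;
            for (const std::string& name : node->inputs)
                in.push_back(name.empty() ? previous : namedTargets[name]);
            for (const std::string& name : node->outputs)
                out.push_back(namedTargets[name]);

            node->Process(in, out);

            if (!out.empty())
                previous = out.front();     // feeds the next node's "" input
        }
    }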


The system seems complex at first, but it is actually easier than performing a heart transplant on a rampaging elephant.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Thanks L Spiro, that's almost exactly what I already have - it's always good to know you're doing something right... ;)

Thank you both. I really like the idea of separate renderers for each screen layer and, in fact, I was going to add two screen layer bits to my sorting mask but, as you've alluded to, CC, I wouldn't be able to add post processing effects to each layer separately (I'm not sure I'd really need this for my game).

That's not exactly correct. You would be able to add different effects to each layer, or different combinations of effects. Each layer just will not have an effect on the others' rendering pipelines.

Take this example where [SR] is a scene renderer and each [PP] is a post-process effect. They chain together (as a list or a directed tree) to perform the steps in order and produce the final output.

layer 1

[SR1]-->[PP1]-->[PP2]--> buffer

layer 2

[SR1]-->[SR2]--> buffer

In layer 1, a scene is rendered in just one step, and two post-process effects are added to that render. In layer 2, one scene is rendered, then a second scene (or it could be the same scene using a different shader), and SR2 depends on a render target created by SR1.

New game in progress: Project SeedWorld

My development blog: Electronic Meteor

That's not exactly correct. You would be able to add different effects to each layer, or different combinations of effects. Each layer just will not have an effect on the others' rendering pipelines.

Sorry, that's not quite what I meant, but thanks for taking the time to clarify. What I meant was that if I were to naively add screen layers into my render sort mask, their bits would be near the high bit and the post process bit would be further down, meaning the list would sort like this:

screen layer 1
  render token 1
  render token 2
  render token 3 (bright pass)
  render token 4 (blur - h & v)
  render token 5 (composite)

screen layer 2
  render token 6
  render token 7
  render token 8 (some nice PP effect on the HUD)
  etc

So if this were just one long list, the second post processing set (render token 8) would post process over the first screen layer when it's not supposed to. I could do it this way, but have each screen layer render to its own surface and then alpha blend the layers together afterwards. I thought full-screen alpha blending was expensive, though. Chances are I won't need post processing effects on my HUD layer, but it's something to think about if I do.
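The per-layer-surface approach might amount to something like this at composite time (a sketch; SetRenderTarget, EnableAlphaBlend and DrawTexturedFullScreenQuad are placeholders for the real API calls):

    #include <vector>

    struct RenderTarget;

    // Placeholder declarations for the real API calls.
    void SetRenderTarget(RenderTarget* target);
    void EnableAlphaBlend(bool enable);
    void DrawTexturedFullScreenQuad(RenderTarget* source);

    // Composite the per-layer surfaces back to front: world layer first, HUD last.
    void CompositeLayers(const std::vector<RenderTarget*>& layerSurfaces, RenderTarget* backBuffer)
    {
        SetRenderTarget(backBuffer);
        EnableAlphaBlend(true);
        for (RenderTarget* layer : layerSurfaces)
            DrawTexturedFullScreenQuad(layer);   // one blended quad per layer
        EnableAlphaBlend(false);
    }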

