Post-Process Framework

10 comments, last by Jason Z 12 years, 11 months ago
I'm working on a post-process framework at the moment and all is good so far for the simple case. The system is made up of 3 main classes:

ChainComponent: takes a texture as input, performs an operation, and outputs to a render target, which can be passed as input to another component. An example would be converting the input to monochrome, or one stage of the HDR pipeline (downscale, blur, etc.)

EffectChain: contains numerous ChainComponents to form a complete effect. Takes an input texture and outputs the result to a render target, which can be used as input to another effect. E.g. HDR/Bloom, DOF

Manager: manages a list of EffectChains and renders them in order.
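
Roughly, the interfaces look something like this (Texture/RenderTarget here are just stand-ins for the real engine types):

#include <vector>

struct Texture {};
struct RenderTarget { Texture texture; };

// One step of an effect: reads a texture, writes a render target.
class ChainComponent
{
public:
    virtual ~ChainComponent() {}
    // e.g. monochrome conversion, downscale, blur, ...
    virtual RenderTarget* Render(Texture* input) = 0;
};

// A complete effect (HDR/bloom, DOF, ...) built from components run in order.
class EffectChain
{
public:
    void Add(ChainComponent* component) { m_components.push_back(component); }

    RenderTarget* Render(Texture* input)
    {
        RenderTarget* result = 0;
        for (size_t i = 0; i < m_components.size(); ++i)
        {
            result = m_components[i]->Render(input);
            input = &result->texture;   // each output feeds the next component
        }
        return result;
    }

private:
    std::vector<ChainComponent*> m_components;
};

// Runs all effect chains in order; the output of one chain feeds the next.
class Manager
{
public:
    void Add(EffectChain* chain) { m_chains.push_back(chain); }

    RenderTarget* Render(Texture* sceneColour)
    {
        RenderTarget* result = 0;
        for (size_t i = 0; i < m_chains.size(); ++i)
        {
            result = m_chains[i]->Render(sceneColour);
            sceneColour = &result->texture;
        }
        return result;
    }

private:
    std::vector<EffectChain*> m_chains;
};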

The problem I'm facing is if a Component requires the output of a previous Component that isn't the one being used as input or if it requires multiple inputs from previous components. A use case for this is the HDR pipeline where the Bloom stage requires the Downscaled texture and the Adapted Luminance.

It might just be because I've limited myself to a simple design where components shouldn't really know about each other, but I thought I'd ask here to see how other people have designed their frameworks and hopefully get some better ideas for making my system more flexible.
I suggest you take the opposite route - that is, start from the target and trace back up through your sources. Something like:

[font="Lucida Sans Unicode"].. ChainComponent::Render(..)[/font]
[font="Lucida Sans Unicode"]{[/font]
[font="Lucida Sans Unicode"]source1->Render()
[/font][font="Lucida Sans Unicode"]source2->Render()
[/font][font="Lucida Sans Unicode"]..
[/font][font="Lucida Sans Unicode"]do your stuff
[/font][font="Lucida Sans Unicode"]..
[/font][font="Lucida Sans Unicode"]return result[/font]
[font="'Lucida Sans Unicode"]}[/font]



But be aware that it's somewhat harder to control the number of temporary buffers required. Consider keeping the temporary buffers in a pool..
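
A very rough sketch of such a pool (keyed only on size here; a real one would also match format, etc.):

#include <vector>

struct RenderTarget { int width, height; bool inUse; };

class RenderTargetPool
{
public:
    // Hand out a free target of the requested size, creating one if needed.
    RenderTarget* Acquire(int width, int height)
    {
        for (size_t i = 0; i < m_targets.size(); ++i)
        {
            RenderTarget* rt = m_targets[i];
            if (!rt->inUse && rt->width == width && rt->height == height)
            {
                rt->inUse = true;
                return rt;
            }
        }
        RenderTarget* rt = new RenderTarget();
        rt->width = width; rt->height = height; rt->inUse = true;
        m_targets.push_back(rt);
        return rt;
    }

    // Return a target to the pool once a component is done with it.
    void Release(RenderTarget* rt) { rt->inUse = false; }

private:
    std::vector<RenderTarget*> m_targets;
};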

Henning
I might be a bit confused, but isn't what you're suggesting basically removing the need for a component and just moving all the components into one big slab of code in the EffectChain?

Eg.



.. EffectChainHDR::Render(..)
{
    downscaleRT = RenderDownscale();
    luminanceRT = RenderLuminance();
    brightPassRT = RenderBrightPass();
    ..
    ..
    ..
    return result
}

This would work but isn't as modular. It does, however, solve the problem of getting easy access to every RT from each stage of the effect, and it allows easier control over reuse of RTs.

Anyone have experience in designing a post process framework that worked well for them? I wouldn't mind knowing how some of the commercial game engines do this sort of thing and allow the end user to create custom effects.
At an old job, we had a similar system, but the "ChainComponents" were a bit more complex - implementing several steps to make up an effect.

For example, a "BloomComponent" might take a full-screen buffer as input, down-scale it to half resolution, blur it horizontally (requiring a 2nd half-res target), blur it vertically (writing back to the 1st half-res target), and then up-scale and compose (over the original full-scale input).

Each component implemented an interface that allowed the engine to inspect its requirements of the render-target pool. For example, the above bloom component would return "1x full-res input/output, 2x half-res temporary" as its requirements. The engine would ensure the render-target pool contained enough targets to satisfy the requirements of all of the components in the chain.
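
Something along these lines (illustrative names only, not the actual interface we had):

#include <vector>

// What a component asks of the render-target pool.
struct TargetRequirement
{
    enum Scale { FullRes, HalfRes };
    Scale scale;
    int   count;
};

class PostComponent
{
public:
    virtual ~PostComponent() {}

    // The engine calls this up front to size the shared pool.
    virtual void GetRequirements(std::vector<TargetRequirement>& out) const = 0;

    virtual void Render() = 0;
};

class BloomComponent : public PostComponent
{
public:
    void GetRequirements(std::vector<TargetRequirement>& out) const
    {
        TargetRequirement fullRes = { TargetRequirement::FullRes, 1 }; // input/output
        TargetRequirement halfRes = { TargetRequirement::HalfRes, 2 }; // blur ping-pong
        out.push_back(fullRes);
        out.push_back(halfRes);
    }

    void Render()
    {
        // downscale -> blur H -> blur V -> upscale & compose,
        // using targets acquired from the pool per the requirements above
    }
};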


With these more complex components you often don't need them to communicate, but if different components do need to work together, there's no reason you can't link them at creation time, e.g.

effect* luminanceCalc = new LuminanceCalcEffect();
effect* tonemap = new TonemapEffect();
tonemap->SetLuminanceInput( &luminanceCalc->GetLuminanceOutput() );
chain.Add(luminanceCalc);
chain.Add(tonemap);


As complex as it is, there was something nice about the DirectShow setup of "filters" that accept certain inputs and only link together if the output of one filter is compatible with the input of another. It also made splitters available that would take a video frame and duplicate it so it could be used by multiple filters. At the start of every filter graph was a "source", where the image originated, and it ended with some type of renderer, or in your case the thing that makes the final output available in a format you like.

What is also neat about that concept is that if you applied it to shaders, you could build a GUI that would allow an artist to place shader blocks that could be connected to each other via some type of connector, and save them. They could quickly build up graphs that describe how a scene or effect is created.

In order to allow a filter to rely on output from past filters, you might place a split filter in the chain after one processing stage, then stall the pipeline at one particular filter until all inputs are made available. In DirectShow, Microsoft typically did this with a clocking signal that controlled the advancement of the pipeline. This would require you to abstract the concept of the effect filter and have a filter be able to decide whether to accept or reject a connection from another filter based on the input. Individual values in a filter that can be modified or animated would need a mechanism for that as well.
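
Applied to post-effects, the accept/reject part might look something like this (made-up names, not the actual DirectShow API):

#include <vector>

enum PixelFormat { Format_RGBA8, Format_RGBA16F, Format_R32F };

struct OutputPin
{
    PixelFormat format;
    int width, height;
};

class Filter
{
public:
    virtual ~Filter() {}

    // What this filter produces.
    virtual OutputPin GetOutput() const = 0;

    // Reject connections whose output we can't consume.
    virtual bool AcceptsInput(const OutputPin& upstream) const = 0;

    // Only wired up if AcceptsInput() said yes.
    bool Connect(Filter* upstream)
    {
        if (!AcceptsInput(upstream->GetOutput()))
            return false;
        m_inputs.push_back(upstream);
        return true;
    }

protected:
    std::vector<Filter*> m_inputs;
};

// Example: a luminance filter that only accepts HDR colour input.
class LuminanceFilter : public Filter
{
public:
    OutputPin GetOutput() const
    {
        OutputPin pin = { Format_R32F, 1, 1 };   // 1x1 average luminance
        return pin;
    }

    bool AcceptsInput(const OutputPin& upstream) const
    {
        return upstream.format == Format_RGBA16F;
    }
};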

Effectively this sounds very similar to the approach implemented by Hodgman.


See, what you're referring to as a "BloomComponent" I'd actually refer to as a "BloomEffect", as it's made up of a number of sub-components (downscale, blur H, blur V). This is where I got stuck, because the sub-components making up an effect required things from each other. Essentially you'd have a chain of effects, and each effect would have a chain of components.

I think what I might have to do is not go as modular and simply have Effects which are more complex and implement several steps to form a complete effect - like you mentioned in your first sentence. So you'd have one effect for the HDR/bloom process, one for DOF, one for MSAA, etc.
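
i.e. something along these lines (just a sketch):

struct Texture {};
struct RenderTarget { Texture texture; };

// One class per complete effect, with its intermediate targets as members
// so every step inside the effect can see them.
class PostEffect
{
public:
    virtual ~PostEffect() {}
    virtual RenderTarget* Render(Texture* input) = 0;
};

class HdrBloomEffect : public PostEffect
{
public:
    RenderTarget* Render(Texture* input)
    {
        RenderDownscale(input);
        RenderLuminance();
        RenderBrightPass();
        RenderBloom();          // can read both m_downscale and m_luminance
        return Compose(input);
    }

private:
    // step bodies stubbed out here
    void RenderDownscale(Texture*) {}
    void RenderLuminance() {}
    void RenderBrightPass() {}
    void RenderBloom() {}
    RenderTarget* Compose(Texture*) { return &m_result; }

    RenderTarget m_downscale, m_luminance, m_brightPass, m_result;
};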

I would like to make the system extensible though and have a way to be able to add new post process effects without touching the engine code. Do you think this is a good idea or should I stick to a strict set of effects? Have you worked on an engine where the post process effects were scriptable?

Thanks!


Many 3D packages these days use visual scripting in the form of node graphs to both create materials (i.e. shaders) and compositing chains (like the DirectShow system Michael mentioned). I'm taking inspiration from these (and other sources) to create my entire render pipeline system at the moment (not just the post processing part).

In this example, many simple 'components' (the sharp-edged rectangles) with their own inputs and outputs have been grouped together to form a more complex "bloom component" (on the lower right).
The yellow tabs are the input/output pins of the components, and the green shows the flow of execution (breadth-first traversal) that the game engine takes through the graph.
[image: screenshot of the node-graph editor]
This lets anything from post-processing, to deferred shading, to be configured via data files without touching the engine code.
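
For the execution side, a very stripped-down version of that traversal might look like this (not the actual system, just the idea; a real implementation would also wait until all of a node's inputs have executed before running it):

#include <queue>
#include <set>
#include <vector>

class Node
{
public:
    virtual ~Node() {}
    virtual void Execute() = 0;   // run the pass, write its output targets

    // Register this node as a consumer of an upstream node's output.
    void AddInput(Node* upstream) { upstream->m_downstream.push_back(this); }

    const std::vector<Node*>& Downstream() const { return m_downstream; }

private:
    std::vector<Node*> m_downstream;
};

// Walk the graph breadth-first from the source node (e.g. scene colour).
void ExecuteGraph(Node* source)
{
    std::queue<Node*> open;
    std::set<Node*>   visited;

    open.push(source);
    visited.insert(source);

    while (!open.empty())
    {
        Node* node = open.front();
        open.pop();

        node->Execute();

        const std::vector<Node*>& next = node->Downstream();
        for (size_t i = 0; i < next.size(); ++i)
        {
            if (visited.insert(next[i]).second)   // enqueue each node once
                open.push(next[i]);
        }
    }
}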

A similar example would be the pipeline format from Horde3D, which achieves the same goal via XML config files (e.g. converting Horde from deferred shading to inferred rendering, or adding DOF or other post-effects, only requires editing XML and GLSL files ;) ).


Wow very cool. This makes me want to do a configurable pipeline :D Thanks for being so helpful. I'm going to have to have a good think about the entire renderer now and see if it's worth just making it all configurable.
I might make myself sound like a stubborn old-school guy, but I'm really not a fan of graph/data-based materials OR postFX systems. To achieve a particular visual effect with near-optimal performance on a particular platform, you often have to implement something specifically rather than building it out of a series of generic nodes. I mean, what happens when depth of field is downscale + blur + composite on one platform, runs on the SPUs on another, and on a third is a compute shader that generates sprites that get rendered with a draw-indirect call? I feel like you have to start breaking things down into larger opaque nodes, which starts to defeat the purpose. Not that I've implemented such a system and have the first-hand experience to know for sure, but it just seems that way based on the typical things I need to do lately for interesting postFX. :P


How do you cope with making it flexible enough that the engine can allow the end-user to add/change effects? Do you just simply not allow this and have a set of hardcoded FX?

