Post processing - approaches


Guys, is there any paper or article that describes possible approaches/ideas for implementing post-processing techniques?

I'm thinking of having a set of 'effects' like bloom, blur, motion blur, and DOF, and I'd like to be able to apply them at any time during the game, like:

[CODE]
m_pEngine->GetEffectsManager()->AddFX(CEffect::Bloom);
m_pEngine->GetEffectsManager()->AddFX(CEffect::DOF);
[/CODE]
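For what it's worth, a minimal sketch of what such an effects manager could look like behind that call. All class and method names here are hypothetical, just mirroring the snippet above; the real manager would also own the render targets and shaders:

```cpp
#include <cassert>
#include <vector>

// Hypothetical effect identifiers, mirroring CEffect::Bloom etc. above.
struct CEffect {
    enum Type { Bloom, Blur, MotionBlur, DOF };
};

// Minimal effects manager: stores the enabled effects in order.
// Order matters, because each effect consumes the previous one's output.
class CEffectsManager {
public:
    void AddFX(CEffect::Type fx) { m_effects.push_back(fx); }

    void RemoveFX(CEffect::Type fx) {
        for (auto it = m_effects.begin(); it != m_effects.end(); ++it) {
            if (*it == fx) { m_effects.erase(it); return; }
        }
    }

    const std::vector<CEffect::Type>& GetEffects() const { return m_effects; }

private:
    std::vector<CEffect::Type> m_effects;  // applied front to back
};
```

Usage would then look just like the snippet above: `manager.AddFX(CEffect::Bloom); manager.AddFX(CEffect::DOF);`.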

What I think can be done is:

[CODE]

<<preprocessing rendering>>
- render the frame to textureA

<<postprocessing>>
- iterate through the effects in a std::vector
- for every effect:
    - bind textureA
    - pass textureA in as the sampler
    - run the effect shader on a full-screen quad with textureA bound
    - store the results in textureA
    - unbind textureA

<<final output>>
- render a full-screen quad with textureA bound

[/CODE]

In the end, this approach returns a texture that combines the output of all the effects in the std::vector.

Not bad, but it requires me to add effects carefully, since an effect's position in the std::vector matters a lot.
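To make that ordering point concrete without any GPU code: the chain is essentially a fold, where each effect is a function of the previous effect's result. A toy sketch under that assumption (effects modelled as plain functions on a stand-in `Frame` value, not real shader passes):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Stand-in for a rendered frame; in the real engine this would be a texture.
using Frame = float;

// Each post-process step takes the previous result and returns a new one,
// i.e. it reads one texture and writes another rather than writing in place.
using Effect = std::function<Frame(Frame)>;

// Apply the effects in vector order: effect N gets effect N-1's output.
Frame ApplyChain(Frame input, const std::vector<Effect>& effects) {
    Frame result = input;
    for (const Effect& fx : effects)  // position in the vector = application order
        result = fx(result);
    return result;
}
```

Running two toy "effects" in both orders shows why the vector position is so important: `(3 + 1) * 2` is not `(3 * 2) + 1`.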


What do you think?
Is there a better, tested solution? Edited by xynapse

It's not that easy. For bloom you have an input texture, you have to downsample it, and then you have to blur it, which involves a horizontal and a vertical pass (if done separately).
For motion blur you probably want to store each pixel's motion (a 2D vector) and blur accordingly.
For DOF it's the same thing, but you use the depth buffer to determine which pixels to blur and to what extent.
You probably don't want to generalize this; rather, hard-code each effect, give it an option to render or not, and give it parameters as well so you can control it.
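On the separable blur mentioned above: it works because a 2D Gaussian kernel is the outer product of two 1D kernels, so an N×N filter becomes N+N taps per pixel across the two passes. A small self-contained check of that identity (a sketch, not shader code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Build a normalized 1D Gaussian kernel for the given radius and sigma.
std::vector<double> Gaussian1D(int radius, double sigma) {
    std::vector<double> k(2 * radius + 1);
    double sum = 0.0;
    for (int i = -radius; i <= radius; ++i) {
        k[i + radius] = std::exp(-(i * i) / (2.0 * sigma * sigma));
        sum += k[i + radius];
    }
    for (double& w : k) w /= sum;  // normalize so overall brightness is preserved
    return k;
}

// The 2D Gaussian weight at (x, y) is the product of the two 1D weights;
// this is exactly why the blur splits into a horizontal and a vertical pass.
double Gaussian2DWeight(const std::vector<double>& k, int x, int y, int radius) {
    return k[x + radius] * k[y + radius];
}
```

In a real pipeline the horizontal pass would apply `k` along x into an intermediate texture, and the vertical pass would apply `k` along y on that intermediate result.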

[quote]
- pass in the textureA as the sampler
- run effect shader on a quad with textureA bound
- store the results in textureA[/quote]
This is not possible. You can't read from and write to the same texture at the same time. What you should do is store the results to the screen (the normal framebuffer). Bloom can be done that way
because it is added to the scene; for depth of field you might have to take textureA and render it to textureB with DOF applied,
then take textureB and bloom it. Focus on the techniques and shaders first, and then you will have an idea of how to design your engine. Edited by dpadam450

dpadam - hi again ;)

True, you're right, but I can do this:

[CODE]

// Imagine a blur here: first a horizontal blur pass, then a vertical one.
// The vertical pass gets its input from the horizontal pass's output.

for (effectN = 0; effectN < effects_size; effectN++)
{
    // pass the resulting texture of effect N-1 to effect N as the input
}

// Then multiply the last effect's output texture with the original frame texture.
[/CODE]


Yes, this won't be a 'typical' blur, since the vertical pass blurs the horizontal result instead of the original, but it can be tweaked a bit to achieve that.
And I believe a few other effects can also be achieved with this approach. I'll have to check.

Anyway, it seems you're saying there's no good off-the-shelf effects-management solution to look at?

So when people code effects into their engines, how do they manage them? Edited by xynapse

[quote name='xynapse' timestamp='1336387669' post='4938029']
Guys is there any paper or art that describes possible approach techniques / ideas for implementing postprocessing techniques ?
[/quote]
I use a multiple-pass system, where I can define, per pass, what is rendered to which render target.
Example:
[CODE]
<pass id="geometry" models="all" >
    <render_target channel="0" id="image_buffer" />
    <render_target channel="1" id="normal_buffer" />
</pass>
<pass id="bloom" models="full_screen_quad" >
    <render_target channel="0" id="image_buffer" />
    <render_target channel="1" id="normal_buffer" />
</pass>
<pass id="shadow" models="all" >
    <render_target channel="0" id="depth_buffer" />
</pass>
[/CODE]
In a shader I can access the render targets as textures (referred to by id). The benefit is that you can mix post-processing steps with geometry rendering passes (sometimes useful for deferred lighting, shadow mapping, geometry edge detection, etc.).
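Once parsed from XML like the above, each pass might be represented by a small struct. A sketch, assuming hypothetical struct and field names rather than the poster's actual engine:

```cpp
#include <cassert>
#include <string>
#include <vector>

// One render-target binding: which output channel writes to which named buffer.
struct RenderTargetBinding {
    int channel;
    std::string id;  // e.g. "image_buffer", "normal_buffer", "depth_buffer"
};

// One <pass> element from the XML: what is drawn and where the results go.
struct RenderPass {
    std::string id;      // e.g. "geometry", "bloom", "shadow"
    std::string models;  // "all" or "full_screen_quad"
    std::vector<RenderTargetBinding> targets;
};

// Look up a pass's output buffer by channel, the way a later pass would
// look it up in order to bind that buffer as an input texture by id.
std::string TargetForChannel(const RenderPass& pass, int channel) {
    for (const RenderTargetBinding& t : pass.targets)
        if (t.channel == channel) return t.id;
    return "";  // no binding on that channel
}
```

A renderer would then walk a `std::vector<RenderPass>` in order, which gives the same explicit ordering control as the std::vector of effects discussed earlier, but with the target wiring made data-driven.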
