xynapse

Post processing - approaches


Guys, is there any paper or article that describes possible techniques / ideas for implementing post-processing effects?

I'm thinking of having a set of 'effects' like bloom, blur, motion blur, and DOF, and I'd like to be able to apply them at any time during the game, like:


m_pEngine->GetEffectsManager()->AddFX(CEffect::Bloom);
m_pEngine->GetEffectsManager()->AddFX(CEffect::DOF);


What I think can be done is:



<<preprocessing rendering>>
- render the frame to textureA

<<postprocessing>>
- iterate through the effects in the std::vector
- for every effect:
  - bind textureA
  - pass in the textureA as the sampler
  - run effect shader on a quad with textureA bound
  - store the results in textureA
  - unbind textureA

<<final output>>
- render a full-screen quad with textureA bound



In the end, this approach returns a texture combining the output of all the effects that were in the std::vector.

Not bad, but it will require me to add effects carefully, since an effect's position in the std::vector is very important.
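For the AddFX interface above, a minimal sketch of such an ordered effect list could look like this (everything beyond the AddFX name is hypothetical; a real effect would also carry its shader and parameters). The vector's insertion order is the execution order, which is exactly why adding effects carefully matters:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical effect identifiers, mirroring CEffect::Bloom etc. from the post.
enum class Effect { Bloom, Blur, MotionBlur, DOF };

// Minimal ordered effect list: effects run in the order they were added.
class EffectsManager {
public:
    void AddFX(Effect e) { m_effects.push_back(e); }

    void RemoveFX(Effect e) {
        m_effects.erase(std::remove(m_effects.begin(), m_effects.end(), e),
                        m_effects.end());
    }

    // The renderer would walk this vector front to back each frame.
    const std::vector<Effect>& Chain() const { return m_effects; }

private:
    std::vector<Effect> m_effects; // order == execution order
};
```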


What do you think?
Is there a better, tested solution? Edited by xynapse

It's not that easy. For bloom you have an input texture; you have to downsample it, then blur it, which involves a horizontal and a vertical pass (if done separably).
For motion blur you probably want to store which pixel has what motion (a 2D vector) and blur accordingly.
For DOF it's the same idea, but you use the depth buffer to determine which pixels to blur and to what extent.
You probably don't want to generalize this thing, but rather hard-code it and give each effect an option to render or not, plus parameters so you can control it.
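To illustrate the hard-coded-with-toggles idea, each effect could get its own settings struct with an enable flag and tuning parameters (all struct and field names here are hypothetical, just a sketch of the shape):

```cpp
// Hypothetical per-effect settings: each effect is hard-coded into the
// pipeline, but can be toggled on/off and tuned through parameters.
struct BloomSettings      { bool enabled = false; float threshold = 1.0f; int downsample = 4; };
struct DofSettings        { bool enabled = false; float focusDist = 10.0f; float focusRange = 5.0f; };
struct MotionBlurSettings { bool enabled = false; int samples = 8; };

struct PostProcessSettings {
    BloomSettings      bloom;
    DofSettings        dof;
    MotionBlurSettings motionBlur;

    // The frame loop would then run each hard-coded stage conditionally,
    // e.g. "if (settings.bloom.enabled) RunBloomPass(settings.bloom);".
    int CountEnabled() const {
        return (bloom.enabled ? 1 : 0) +
               (dof.enabled ? 1 : 0) +
               (motionBlur.enabled ? 1 : 0);
    }
};
```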


- pass in the textureA as the sampler
- run effect shader on a quad with textureA bound
- store the results in textureA
This is not possible: you can't read from and write to the same texture at the same time. What you should do is store the results to the screen (the normal framebuffer). Bloom can be done that way
because it is added to the scene; for depth of field you might have to take textureA and render it to textureB with DOF applied,
then take textureB and bloom it. Focus on the techniques and shaders first; then you will have an idea of how to design your engine. Edited by dpadam450
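The textureA-to-textureB pattern described here is often called texture "ping-ponging": two render targets whose read/write roles swap after every pass. A minimal CPU-side sketch, with textures stood in for by plain float vectors (all names hypothetical; a real renderer would use two FBO color attachments):

```cpp
#include <array>
#include <functional>
#include <vector>

// "Textures" modeled as flat float arrays for illustration.
using Texture  = std::vector<float>;
using EffectFn = std::function<void(const Texture& in, Texture& out)>;

// Two buffers whose read/write roles swap after every pass, so no pass
// ever reads the texture it is writing.
struct PingPong {
    std::array<Texture, 2> buf;
    int readIdx = 0;

    Texture&       Write()      { return buf[1 - readIdx]; }
    const Texture& Read() const { return buf[readIdx]; }
    void Swap()                 { readIdx = 1 - readIdx; }
};

// Apply each effect in order: read one buffer, write the other, then swap
// so the last output becomes the next input.
const Texture& RunChain(PingPong& pp, const std::vector<EffectFn>& effects) {
    for (const EffectFn& fx : effects) {
        fx(pp.Read(), pp.Write());
        pp.Swap();
    }
    return pp.Read(); // final result of the whole chain
}
```

The same idea maps directly to GL: bind one texture as the sampler, attach the other to the framebuffer, draw the quad, swap.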

dpadam - hi again ;)

True, you're right, but I can do this:



// Imagine a blur here: first blur horizontally, then vertically.
// The vertical pass gets its input from the horizontal pass's output.

for (effectN = 0; effectN < effects_size; effectN++)
{
    // pass the resulting texture of effect N-1 to effect N as the input
}

Then I can multiply the last effect's output texture with the original frame texture.



Yes, this won't be a 'typical' blur, since we're blurring vertically from the horizontal result instead of the original, but it can be tweaked a bit to achieve that.
And I believe a few other effects can also be achieved with this approach. I will have to check.
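For what it's worth, vertical-blurring the horizontal result is exactly how a separable blur is normally done, and it equals the full 2D box blur. A small CPU-side sketch of a 3-tap separable box blur (zero-padded borders assumed; a real version would be two fragment-shader passes, and all names here are hypothetical):

```cpp
#include <vector>

// Image as a flat, row-major float array.
using Image = std::vector<float>;

// Sample with out-of-bounds taps reading as zero (zero-padded border).
static float At(const Image& img, int w, int h, int x, int y) {
    if (x < 0 || y < 0 || x >= w || y >= h) return 0.0f;
    return img[y * w + x];
}

// One 3-tap box-blur pass along direction (dx, dy):
// (1,0) = horizontal pass, (0,1) = vertical pass.
Image BlurPass(const Image& in, int w, int h, int dx, int dy) {
    Image out(in.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            out[y * w + x] = (At(in, w, h, x - dx, y - dy) +
                              At(in, w, h, x,      y     ) +
                              At(in, w, h, x + dx, y + dy)) / 3.0f;
    return out;
}

// The vertical pass blurs the horizontally blurred result, not the
// original -- and that chaining is what makes the two 1D passes
// equivalent to the full 3x3 box blur.
Image SeparableBlur(const Image& in, int w, int h) {
    return BlurPass(BlurPass(in, w, h, 1, 0), w, h, 0, 1);
}
```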

Anyway, it seems like you're saying there is no good effects-management solution around to look at?

So when people code effects in their engines, how do they manage them? Edited by xynapse


Guys, is there any paper or article that describes possible techniques / ideas for implementing post-processing effects?

I use a multi-pass system, where I can define per pass what is rendered to which render target.
Example:

<pass id="geometry" models="all">
    <render_target channel="0" id="image_buffer" />
    <render_target channel="1" id="normal_buffer" />
</pass>
<pass id="bloom" models="full_screen_quad">
    <render_target channel="0" id="image_buffer" />
    <render_target channel="1" id="normal_buffer" />
</pass>
<pass id="shadow" models="all">
    <render_target channel="0" id="depth_buffer" />
</pass>


In a shader I can access the render targets as textures (referred to by id). The benefit is that you can mix post-processing steps with geometry rendering passes (sometimes useful for deferred lighting, shadow mapping, geometry edge detection, etc.).
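One way such an XML pass description might be held in memory is sketched below (the struct names are hypothetical and the XML parsing itself is omitted). Because passes name their targets by id, a later pass can look up which earlier pass produced the texture it wants to read:

```cpp
#include <string>
#include <vector>

// One render-target binding: which output channel writes to which target id.
struct RenderTargetBinding {
    int channel;
    std::string targetId;
};

// One pass from the XML: what geometry it draws and where it writes.
struct Pass {
    std::string id;
    std::string models; // e.g. "all" or "full_screen_quad"
    std::vector<RenderTargetBinding> outputs;
};

struct Pipeline {
    std::vector<Pass> passes; // in execution order

    // Find the pass that last wrote a given target, i.e. the version a
    // later shader would read when it refers to that target id.
    const Pass* ProducerOf(const std::string& targetId) const {
        const Pass* found = nullptr;
        for (const Pass& p : passes)
            for (const RenderTargetBinding& rt : p.outputs)
                if (rt.targetId == targetId) found = &p;
        return found;
    }
};
```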

@Ashaman73 - thanks, looks like a nice idea - I will take a closer look at something similar after the weekend.
