

Post processing - approaches



#1 xynapse   Members   -  Reputation: 151


Posted 07 May 2012 - 04:47 AM

Guys, is there any paper or article that describes possible approaches / ideas for implementing post-processing techniques?

I'm thinking of having a set of 'effects' like bloom, blur, motion blur, and DOF, and would like to be able to apply them at any time during the game, like:

m_pEngine->GetEffectsManager()->AddFX(CEffect::Bloom);
m_pEngine->GetEffectsManager()->AddFX(CEffect::DOF);
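A minimal sketch of what such a manager might look like; `CEffect` and `AddFX` come from the snippet above, but everything else (the enum values, the storage, the getter) is a hypothetical illustration, not a real API:

```cpp
#include <vector>

// Hypothetical effects manager: effects are applied in insertion order,
// which is why AddFX order matters (as discussed below).
enum class CEffect { Bloom, Blur, MotionBlur, DOF };

class CEffectsManager {
public:
    void AddFX(CEffect fx) { m_effects.push_back(fx); }
    const std::vector<CEffect>& GetFX() const { return m_effects; }
private:
    std::vector<CEffect> m_effects; // ordered list of active effects
};
```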

What I think can be done is:


<<preprocessing rendering>>
- render frame to textureA

<<postprocessing>>
- iterate through the effects in a std::vector
  - for every effect:
    - bind textureA
    - pass in the textureA as the sampler
    - run effect shader on a quad with textureA bound
    - store the results in textureA
    - unbind textureA

<<final output>>
- render a full screen quad with textureA bound


In the end this approach returns a texture that is the sum of all the effect outputs that were in the std::vector.

Not bad, but it will require me to add effects carefully, as an effect's position in the std::vector is very important.


What do you think?
Is there a better, tested solution?

Edited by xynapse, 07 May 2012 - 05:10 AM.

perfection.is.the.key


#2 Yours3!f   Members   -  Reputation: 1385


Posted 07 May 2012 - 07:33 AM

It's not that easy... For bloom you have an input texture; you have to downsample it, then blur it, which involves a horizontal and a vertical pass (if done separably).
For motion blur you probably want to store which pixel has what motion (a 2D vector) and blur accordingly.
For DOF it's the same thing, but you want to use the depth buffer to determine which pixels to blur and to what extent.
You probably don't want to generalize this; rather, hard-code each effect, give it an option to be rendered or not, and give it parameters as well so you can control it.
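The separable blur idea above can be illustrated CPU-side: a normalised 1D kernel applied in a horizontal pass, then a vertical pass over the horizontal result. On the GPU each pass would be a full-screen shader reading the previous pass's render target; here a nested `std::vector` stands in for a texture, and all names are made up for the sketch:

```cpp
#include <algorithm>
#include <vector>

using Image = std::vector<std::vector<float>>; // image[y][x]

// Clamp-to-edge addressing, like GL_CLAMP_TO_EDGE on a sampler.
static float tap(const Image& img, int y, int x) {
    y = std::max(0, std::min(y, (int)img.size() - 1));
    x = std::max(0, std::min(x, (int)img[0].size() - 1));
    return img[y][x];
}

Image blurSeparable(const Image& src) {
    const float k[3] = { 0.25f, 0.5f, 0.25f }; // normalised 3-tap kernel
    Image h = src, out = src;
    // Horizontal pass: read src, write h.
    for (int y = 0; y < (int)src.size(); ++y)
        for (int x = 0; x < (int)src[0].size(); ++x)
            h[y][x] = k[0]*tap(src, y, x-1) + k[1]*tap(src, y, x) + k[2]*tap(src, y, x+1);
    // Vertical pass: read h, write out - never the buffer being read.
    for (int y = 0; y < (int)src.size(); ++y)
        for (int x = 0; x < (int)src[0].size(); ++x)
            out[y][x] = k[0]*tap(h, y-1, x) + k[1]*tap(h, y, x) + k[2]*tap(h, y+1, x);
    return out;
}
```

Note that each pass reads one buffer and writes a different one, which is exactly the constraint discussed further down the thread.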

#3 dpadam450   Members   -  Reputation: 934


Posted 07 May 2012 - 09:23 AM

- pass in the textureA as the sampler
- run effect shader on a quad with textureA bound
- store the results in textureA

This is not possible: you can't read from and write to the same texture at the same time. What you should do is write the result to the screen (the normal framebuffer). Bloom can be done that way
because it is added to the scene; for depth of field you might have to take textureA and render it to textureB with DOF applied,
then take textureB and bloom it. Focus on the techniques and shaders first; then you will have an idea of how to design your engine.
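The textureA-to-textureB idea generalises to a "ping-pong" scheme: keep two buffers and swap which one is source and which is target after every effect. A rough sketch, with `std::vector<float>` standing in for a texture and plain functions standing in for effect shaders (all names here are illustrative, not from the thread):

```cpp
#include <functional>
#include <utility>
#include <vector>

using Texture = std::vector<float>;
using Effect  = std::function<void(const Texture& src, Texture& dst)>;

// Applies each effect in order, alternating which texture is read and
// which is written, so no effect ever reads the texture it writes.
Texture runEffectChain(Texture frame, const std::vector<Effect>& effects) {
    Texture a = std::move(frame);
    Texture b(a.size());
    for (const Effect& fx : effects) {
        fx(a, b);         // read a, write b
        std::swap(a, b);  // last output becomes the next input
    }
    return a;
}
```

With real render targets the swap is just exchanging two FBO/texture handles, so the chain costs no extra copies.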

Edited by dpadam450, 07 May 2012 - 09:24 AM.


#4 xynapse   Members   -  Reputation: 151


Posted 07 May 2012 - 09:41 AM

dpadam - hi again ;)

True, you're right, but I can do:


// Imagine blur here: first a horizontal blur, then a vertical blur.
// The vertical pass gets its input from the horizontal pass's output.

for (effectN = 0; effectN < effects_size; effectN++)
{
    // pass the resulting texture of effect N-1 to effect N as the input
}

Then I can multiply the last effect's output texture with the original frame texture.


Yes, this won't be a 'typical' blur, since we're blurring vertically from the horizontal result instead of from the original, but it can be tweaked a bit to achieve that.
And I believe a few other effects can also be achieved with this approach... will have to check.

Anyway, it seems you're saying there is no good effects-management solution around to have a look at?

So when people code effects in their engines, how do they manage them?

Edited by xynapse, 07 May 2012 - 10:03 AM.

perfection.is.the.key

#5 Ashaman73   Crossbones+   -  Reputation: 7876


Posted 08 May 2012 - 02:40 AM

Guys, is there any paper or article that describes possible approaches / ideas for implementing post-processing techniques?

I use a multi-pass system where I can define, per pass, what is rendered to which render target.
Example:
<pass id="geometry" models="all" >
  <render_target channel="0" id="image_buffer" />
  <render_target channel="1" id="normal_buffer" />
</pass>
<pass id="bloom" models="full_screen_quad" >
  <render_target channel="0" id="image_buffer" />
  <render_target channel="1" id="normal_buffer" />
</pass>
<pass id="shadow" models="all" >
  <render_target channel="0" id="depth_buffer" />
</pass>

In a shader I can access the render targets as textures (referred to by id). The benefit is that you can mix post-processing steps with geometry rendering passes (sometimes useful for deferred lighting, shadow mapping, geometry edge detection, etc.).
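The pass description above maps naturally onto a small data model: passes name the targets they write, and a registry lets later passes (or shaders) look up any earlier target by id. A sketch of that idea; the types and member names are assumptions, not Ashaman73's actual code:

```cpp
#include <map>
#include <string>
#include <vector>

struct RenderTarget { int width = 0, height = 0; /* GPU handle, format, ... */ };

struct Pass {
    std::string id;                   // e.g. "geometry", "bloom", "shadow"
    std::string models;               // "all" or "full_screen_quad"
    std::vector<std::string> targets; // render-target ids, one per channel
};

class PassSystem {
public:
    void addTarget(const std::string& id, RenderTarget rt) { m_targets[id] = rt; }
    void addPass(Pass p) { m_passes.push_back(std::move(p)); }
    // A shader binding an input texture would resolve it the same way.
    const RenderTarget* findTarget(const std::string& id) const {
        auto it = m_targets.find(id);
        return it == m_targets.end() ? nullptr : &it->second;
    }
    const std::vector<Pass>& passes() const { return m_passes; }
private:
    std::map<std::string, RenderTarget> m_targets;
    std::vector<Pass> m_passes;
};
```

In practice the XML above would be parsed into exactly these structures, and the renderer would execute `passes()` in order each frame.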

#6 xynapse   Members   -  Reputation: 151


Posted 11 May 2012 - 04:46 AM

@Ashaman73 - thanks, looks like a nice idea - will have a closer look at something similar after the weekend.
perfection.is.the.key



