OpenGL post-processing

Started by Greenpanoply
1 comment, last by 0r0d 11 years ago

When implementing post-processing of one's 3D scene, such as bloom or ambient occlusion, does one discard the rendered 3D scene and use only a rendered quad as the final image that will be seen, or does one overlay those effects and blend them with the actual rendered scene? I imagine there are multiple ways of doing this, but I would just like to know what is typically the industry standard.

Thanks

J-GREEN

Greenpanoply


I draw my geometry into a framebuffer, then use that for all post-processing effects. Once I'm done, I just draw a full-screen quad with all my final effects.
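
For reference, a minimal sketch of that flow in C with desktop OpenGL. The names sceneFBO, sceneColorTex, postProgram, fullscreenQuadVAO and drawScene() are placeholders for resources assumed to be created elsewhere; this is just the per-frame ordering, not a complete renderer:

    /* 1. Render the 3D scene into an offscreen framebuffer. */
    glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
    glViewport(0, 0, width, height);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene();                        /* normal geometry pass */

    /* 2. Switch back to the default framebuffer and draw a full-screen quad,
       sampling the scene color as a texture inside the post-process shader. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDisable(GL_DEPTH_TEST);
    glUseProgram(postProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, sceneColorTex);
    glUniform1i(glGetUniformLocation(postProgram, "uScene"), 0);
    glBindVertexArray(fullscreenQuadVAO);
    glDrawArrays(GL_TRIANGLES, 0, 6);   /* two triangles covering the screen */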

Check out https://www.facebook.com/LiquidGames for some great games made by me on the Playstation Mobile market.


It depends on what operations you're doing in that final pass where you combine all your post-effects into a final buffer. If you're doing something simple like adding in bloom, you can just blend it additively onto the existing buffer. Or you might bind that buffer's color attachment as a texture in the final shader and render the final pixel color back into the same buffer. That should be OK as long as you read and write only within that single pixel. But if you're reading multiple texels around the target UV coordinates, for a blur or similar, then some of the texels you read will already have been overwritten by the pass in progress and some won't, which won't be correct. Any such operation should happen beforehand into another buffer, which you then source as a texture in the final shader.
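
To illustrate both cases, here is a rough C/OpenGL sketch, not anyone's actual engine code. bloomTex, blurFBO, blurColorTex, blurProgram, addBlendProgram, sceneFBO, blurPasses and fullscreenQuadVAO are hypothetical resources assumed to exist:

    /* A blur must not read and write the same attachment, so ping-pong
       between two buffers: each pass reads the result of the previous one. */
    GLuint src = bloomTex;
    for (int i = 0; i < blurPasses; ++i) {
        glBindFramebuffer(GL_FRAMEBUFFER, blurFBO[i & 1]);
        glUseProgram(blurProgram);
        glBindTexture(GL_TEXTURE_2D, src);
        glUniform1i(glGetUniformLocation(blurProgram, "uHorizontal"), i & 1);
        glBindVertexArray(fullscreenQuadVAO);
        glDrawArrays(GL_TRIANGLES, 0, 6);
        src = blurColorTex[i & 1];      /* next pass samples what we just wrote */
    }

    /* The simple case: additively blend the blurred bloom onto the existing
       scene buffer with a full-screen quad. */
    glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);        /* dst = dst + src */
    glUseProgram(addBlendProgram);      /* just samples the bloom texture */
    glBindTexture(GL_TEXTURE_2D, src);
    glBindVertexArray(fullscreenQuadVAO);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    glDisable(GL_BLEND);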

